id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
1,110,600 | https://en.wikipedia.org/wiki/Oxaloacetic%20acid | Oxaloacetic acid (also known as oxalacetic acid or OAA) is a crystalline organic compound with the chemical formula HO2CC(O)CH2CO2H. Oxaloacetic acid, in the form of its conjugate base oxaloacetate, is a metabolic intermediate in many processes that occur in animals. It takes part in gluconeogenesis, the urea cycle, the glyoxylate cycle, amino acid synthesis, fatty acid synthesis and the citric acid cycle.
Properties
Oxaloacetic acid undergoes successive deprotonations to give the dianion:
HO2CC(O)CH2CO2H ⇌ −O2CC(O)CH2CO2H + H+, pKa = 2.22
−O2CC(O)CH2CO2H ⇌ −O2CC(O)CH2CO2− + H+, pKa = 3.89
At high pH, the enolizable proton is ionized:
−O2CC(O)CH2CO2− ⇌ −O2CC(O−)CHCO2− + H+, pKa = 13.03
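As an illustrative calculation (a standard Henderson–Hasselbalch estimate, not part of the original article): at physiological pH 7.4 the dianion-to-monoanion ratio is 10^(7.4 − 3.89) ≈ 3,200, so the dianion dominates, while the enolizable proton (pKa = 13.03) remains essentially un-ionized under the same conditions.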
The enol forms of oxaloacetic acid are particularly stable. Keto-enol tautomerization is catalyzed by the enzyme oxaloacetate tautomerase. trans-Enol-oxaloacetate also appears when tartrate is the substrate for fumarase.
Biosynthesis
Oxaloacetate forms in several ways in nature. A principal route is the oxidation of L-malate, catalyzed by malate dehydrogenase, in the citric acid cycle. Malate is also oxidized by succinate dehydrogenase in a slow reaction whose initial product is enol-oxaloacetate.
It also arises from the carboxylation of pyruvate with bicarbonate, driven by the hydrolysis of ATP:
CH3C(O)CO2− + HCO3− + ATP → −O2CCH2C(O)CO2− + ADP + Pi
Occurring in the mesophyll of plants, this process proceeds via phosphoenolpyruvate and is catalysed by phosphoenolpyruvate carboxylase. Oxaloacetate can also arise from transamination or deamination of aspartic acid.
Biochemical functions
Oxaloacetate is an intermediate of the citric acid cycle, where it reacts with acetyl-CoA to form citrate, catalyzed by citrate synthase. It is also involved in gluconeogenesis, the urea cycle, the glyoxylate cycle, amino acid synthesis, and fatty acid synthesis. Oxaloacetate is also a potent inhibitor of complex II.
Gluconeogenesis
Gluconeogenesis is a metabolic pathway consisting of a series of eleven enzyme-catalyzed reactions that generate glucose from non-carbohydrate substrates. The process begins in the mitochondrial matrix, where pyruvate molecules are found. A pyruvate molecule is carboxylated by the enzyme pyruvate carboxylase, activated by a molecule each of ATP and water; this reaction yields oxaloacetate. NADH then reduces the oxaloacetate to malate, a transformation needed to transport the molecule out of the mitochondria. Once in the cytosol, the malate is oxidized back to oxaloacetate using NAD+, and the remaining reactions of the pathway take place there. The oxaloacetate is then decarboxylated and phosphorylated by phosphoenolpyruvate carboxykinase, becoming 2-phosphoenolpyruvate, with guanosine triphosphate (GTP) as the phosphate source. Glucose is obtained after further downstream processing.
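The two oxaloacetate-related steps can be written in the same notation used above (standard textbook stoichiometry, added here for clarity):

pyruvate + HCO3− + ATP → oxaloacetate + ADP + Pi (pyruvate carboxylase)
oxaloacetate + GTP → phosphoenolpyruvate + CO2 + GDP (phosphoenolpyruvate carboxykinase)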
Urea cycle
The urea cycle is a metabolic pathway that forms urea from one ammonium ion derived from degraded amino acids, a second amino group contributed by aspartate, and one bicarbonate ion. This route commonly occurs in hepatocytes. The reactions related to the urea cycle produce NADH, which can be generated in two different ways; one of these uses oxaloacetate. Fumarate, present in the cytosol, can be transformed into malate by the enzyme fumarase. Malate is then acted on by malate dehydrogenase to become oxaloacetate, producing a molecule of NADH. After that, the oxaloacetate is recycled to aspartate, as transaminases prefer these keto acids over others. This recycling maintains the flow of nitrogen into the cell.
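In equation form (standard stoichiometry, added here as an illustration):

fumarate + H2O → malate (fumarase)
malate + NAD+ → oxaloacetate + NADH + H+ (malate dehydrogenase)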
Glyoxylate cycle
The glyoxylate cycle is a variant of the citric acid cycle. It is an anabolic pathway occurring in plants and bacteria that uses the enzymes isocitrate lyase and malate synthase. Some intermediate steps of the cycle differ slightly from those of the citric acid cycle; nevertheless, oxaloacetate has the same function in both processes, acting as both the primary reactant and the final product. Oxaloacetate is in fact a net product of the glyoxylate cycle, because each turn of the cycle incorporates two molecules of acetyl-CoA.
Fatty acid synthesis
In the early stages of fatty acid synthesis, acetyl-CoA is transferred from the mitochondria to the cytoplasm, where fatty acid synthase resides. The acetyl-CoA is transported as citrate, formed beforehand in the mitochondrial matrix from acetyl-CoA and oxaloacetate. This condensation usually initiates the citric acid cycle, but when energy is not needed the citrate is instead transported to the cytoplasm, where it is broken down into cytoplasmic acetyl-CoA and oxaloacetate.
Fatty acid synthesis also requires NADPH. Part of this reducing power is generated when the cytosolic oxaloacetate is returned to the mitochondria, since the inner mitochondrial membrane is impermeable to oxaloacetate itself. First the oxaloacetate is reduced to malate using NADH. The malate is then oxidatively decarboxylated to pyruvate by malic enzyme, a step that generates NADPH. This pyruvate can easily enter the mitochondria, where it is carboxylated back to oxaloacetate by pyruvate carboxylase. In this way, the transfer of acetyl-CoA from the mitochondria into the cytoplasm is coupled to the production of a molecule of NADPH. The overall reaction, which is spontaneous, may be summarized as:
HCO3− + ATP + acetyl-CoA → ADP + Pi + malonyl-CoA
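The NADPH-producing step of the shuttle described above (malic enzyme), written in the same notation (standard stoichiometry, added for clarity), is:

malate + NADP+ → pyruvate + CO2 + NADPH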
Amino acid synthesis
Six essential amino acids and three nonessential are synthesized from oxaloacetate and pyruvate. Aspartate and alanine are formed from oxaloacetate and pyruvate, respectively, by transamination from glutamate. Asparagine is synthesized by amidation of aspartate, with glutamine donating the NH4+.
These are nonessential amino acids, and their simple biosynthetic pathways occur in all organisms. Methionine, threonine, lysine, isoleucine, valine, and leucine are essential amino acids in humans and most vertebrates. Their biosynthetic pathways in bacteria are complex and interconnected.
Oxalate biosynthesis
Oxaloacetate produces oxalate by hydrolysis.
oxaloacetate + H2O → oxalate + acetate
This process is catalyzed by the enzyme oxaloacetase. This enzyme is seen in plants, but is not known in the animal kingdom.
See also
Dioxosuccinic acid
Glycolysis
Oxidative phosphorylation
Citric acid cycle
References
Citric acid cycle compounds
Dicarboxylic acids
Alpha-keto acids
Beta-keto acids
Metabolic intermediates
Biomolecules | Oxaloacetic acid | Chemistry,Biology | 1,744 |
17,291,837 | https://en.wikipedia.org/wiki/Arsenic%20trifluoride | Arsenic trifluoride is a chemical compound of arsenic and fluorine with the chemical formula AsF3. It is a colorless liquid which reacts readily with water. Like other inorganic arsenic compounds, it is highly toxic.
Preparation and properties
It can be prepared by reacting hydrogen fluoride, HF, with arsenic trioxide:
6HF + As2O3 → 2AsF3 + 3H2O
It has a pyramidal molecular structure in the gas phase, which is also present in the solid. In the gas phase the As–F bond length is 170.6 pm and the F–As–F bond angle is 96.2°.
Arsenic trifluoride is used as a fluorinating agent for the conversion of non-metal chlorides to fluorides; in this respect it is less reactive than SbF3.
Salts containing the AsF4− anion can be prepared, for example CsAsF4. The potassium salt KAs2F7, prepared from KF and AsF3, contains AsF4− anions and AsF3 molecules, with evidence of interaction between the AsF3 molecules and the anions.
AsF3 reacts with SbF5. The product could be described as the ionic compound AsF2+ SbF6−; however, the authors conclude that it can be viewed neither purely as an ionic compound nor entirely as the neutral adduct AsF3·SbF5. The crystal structure displays characteristics of both an ion pair and a neutral adduct, occupying a middle ground between the two models.
References
Arsenic(III) compounds
Arsenic halides
Fluorides
Fluorinating agents | Arsenic trifluoride | Chemistry | 334 |
46,564,590 | https://en.wikipedia.org/wiki/Dicyanamide | Dicyanamide, also known as dicyanamine, is an anion with the formula [N(CN)2]−. It contains two cyanide groups bound to a central nitrogen anion. The chemical is formed by decomposition of 2-cyanoguanidine. It is used extensively as a counterion of organic and inorganic salts, and also as a reactant for the synthesis of various covalent organic structures.
Dicyanamide was used as the anionic component in an organic superconductor that, when reported in 1990, had the highest transition temperature in its structural class. Dean Kenyon has examined the role of this chemical in reactions that can produce peptides. A co-worker then considered this reactive nature and examined the possible role dicyanamide may have had in primordial biogenesis.
References
Nitriles
Anions | Dicyanamide | Physics,Chemistry | 177 |
11,471,843 | https://en.wikipedia.org/wiki/Phaeosphaerella%20mangiferae | Phaeosphaerella mangiferae is a plant pathogen affecting mangoes.
See also
List of mango diseases
References
Fungal plant pathogens and diseases
Mango tree diseases
Pleosporales
Fungus species | Phaeosphaerella mangiferae | Biology | 43 |
45,508,157 | https://en.wikipedia.org/wiki/80P/Peters%E2%80%93Hartley | 80P/Peters–Hartley is a periodic comet in the Solar System with an orbital period of 8.12 years.
It was originally discovered by Christian Heinrich Friedrich Peters of Capodimonte Observatory, Naples, Italy. There was insufficient data to accurately compute the orbit, and the comet was lost for well over a hundred years.
It was accidentally rediscovered by Malcolm Hartley at the UK Schmidt Telescope Unit, Siding Spring, Australia on a photographic plate exposed on 11 July 1982. He estimated its brightness at a magnitude of 15. The sighting was confirmed by the Perth Observatory, where M. C. Candy calculated the orbit and concluded that Hartley had indeed relocated Peters' lost comet. I. Hasegawa and Syuichi Nakano had simultaneously reached the same conclusion.
It was observed at its next apparition in 1990 by R. H. McNaught of the Siding Spring observatory, who described it as diffuse with a brightness of magnitude 14. It was subsequently observed in 1998, 2006 and 2014.
See also
List of numbered comets
List of comets by type
References
External links
Periodic comets
Recovered astronomical objects | 80P/Peters–Hartley | Astronomy | 234 |
709,308 | https://en.wikipedia.org/wiki/Montague%20grammar | Montague grammar is an approach to natural language semantics, named after American logician Richard Montague. It is based on mathematical logic, especially higher-order predicate logic and lambda calculus, and makes use of the notions of intensional logic, via Kripke models. Montague pioneered this approach in the 1960s and early 1970s.
Overview
Montague's thesis was that natural languages (like English) and formal languages (like programming languages) can be treated in the same way:
There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians; indeed, I consider it possible to comprehend the syntax and semantics of both kinds of language within a single natural and mathematically precise theory. On this point I differ from a number of philosophers, but agree, I believe, with Chomsky and his associates. ("Universal Grammar" 1970)
Montague published what soon became known as Montague grammar in three papers:
1970: "Universal grammar" (= UG)
1970: "English as a Formal Language" (= EFL)
1973: "The Proper Treatment of Quantification in Ordinary English" (= PTQ)
Illustration
Montague grammar can represent the meanings of quite complex sentences compactly. Below is a grammar presented in Eijck and Unger's textbook.
The types of the syntactic categories in the grammar are as follows, with t denoting a term (a reference to an entity) and f denoting a formula: a sentence S has type f, a verb phrase VP has type t → f, and a noun phrase NP has the higher-order type (t → f) → f. The meaning of a sentence obtained by the rule S → NP VP is obtained by applying the function for NP to the function for VP.
The types of VP and NP might appear unintuitive because of the question as to the meaning of a noun phrase that is not simply a term. This is because the meanings of many noun phrases, such as "the man who whistles", are not just terms in predicate logic, but also include a predicate for the activity, like "whistles", which cannot be represented in the term (consisting of constant and function symbols but not of predicates). So we need some term, for example x, and a formula whistles(x) to refer to the man who whistles. The meaning of a verb phrase VP can then be expressed with that term, for example stating that a particular x satisfies sleeps(x) ∧ snores(x) (expressed as a function from x to that formula). Now the function associated with the NP takes that kind of function and combines it with the formulas needed to express the meaning of the noun phrase. This particular way of stating NP and VP is not the only possible one.
The key point is that the meaning of an expression is obtained as a function of the meanings of its components, either by function application (indicated by boldface parentheses enclosing function and argument) or by constructing a new function from the functions associated with the components. This compositionality makes it possible to assign meanings reliably to arbitrarily complex sentence structures, with auxiliary clauses and many other complications.
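As a minimal sketch of this compositionality (an illustration using simplified types in Python, not the actual grammar from Eijck and Unger's textbook), meanings can be modelled as functions, with a VP mapping a term to a formula and an NP being a higher-order function applied to the VP:

# VP: a function from a term to a formula (type t -> f)
def sleeps(x):
    return f"sleeps({x})"

# NPs: higher-order functions of type (t -> f) -> f
def every_man(vp):
    return f"forall x (man(x) -> {vp('x')})"

def a_woman(vp):
    return f"exists x (woman(x) and {vp('x')})"

# Rule S -> NP VP: the sentence meaning is the NP function applied to the VP function.
print(every_man(sleeps))  # forall x (man(x) -> sleeps(x))
print(a_woman(sleeps))    # exists x (woman(x) and sleeps(x))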
The meanings of other categories of expressions are likewise either function applications or higher-order functions. The following are the rules of the grammar, with the first column indicating a non-terminal symbol, the second column one possible way of producing that non-terminal from other non-terminals and terminals, and the third column indicating the corresponding meaning.
Here are example expressions and their associated meanings, according to the above grammar, showing that the meaning of a given sentence is formed from its constituent expressions, either by forming a new higher-order function or by applying a higher-order function for one expression to the meaning of another.
The following are other examples of sentences translated into the predicate logic by the grammar.
In popular culture
In David Foster Wallace's novel Infinite Jest, the protagonist Hal Incandenza has written an essay entitled Montague Grammar and the Semantics of Physical Modality. Montague grammar is also referenced explicitly and implicitly several times throughout the book.
See also
References
Further reading
Richmond Thomason (ed.): Formal Philosophy. Selected Papers by Richard Montague. New Haven, 1974.
Paul Portner, Barbara H. Partee (eds.): Formal Semantics: The Essential Readings. Blackwell, 2002.
D. R. Dowty, R. E. Wall and S. Peters: Introduction to Montague Semantics. Kluwer Academic Publishers, 1981.
Emmon Bach: Informal Lectures on Formal Semantics. SUNY Press, 1989.
B. H. Partee, A. G. B. ter Meulen and R. E. Wall: Mathematical Methods in Linguistics. Kluwer Academic Publishers, 1990.
B. H. Partee with Herman Hendriks: Montague Grammar. In: Handbook of Logic and Language, eds. J. F. A. K. van Benthem and A. G. B. ter Meulen. Elsevier/MIT Press, 1997, pp. 5–92.
Reinhard Muskens: Type-logical Semantics. To appear in the Routledge Encyclopedia of Philosophy Online (contains an annotated bibliography).
External links
A Free Montague Parser in a non-deterministic extension of Common Lisp.
Montague Grammar in historical context. / The theory and the substance of Montague grammar. Central principles. / Further developments and controversies. by Barbara H. Partee.
Grammar
Semantics
Formal languages
Lambda calculus | Montague grammar | Mathematics | 1,098 |
86,092 | https://en.wikipedia.org/wiki/Bile | Bile (from Latin bilis), or gall, is a yellow-green/misty green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, is produced continuously by the liver, and is stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of the small intestine.
Composition
In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is orange-yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About 400 to 800 millilitres of bile is produced per day in adult human beings.
Function
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food would be excreted in feces, undigested.
Since bile increases the absorption of fats, it is an important part of the absorption of the fat-soluble substances, such as the vitamins A, D, E, and K.
Besides its digestive function, bile serves also as the route of excretion for bilirubin, a byproduct of red blood cells recycled by the liver. Bilirubin derives from the heme of hemoglobin and is conjugated with glucuronic acid (glucuronidation) in the liver before being excreted in bile.
Bile tends to be alkaline on average. The pH of common duct bile (7.50 to 8.05) is higher than that of the corresponding gallbladder bile (6.80 to 7.65). Bile in the gallbladder becomes more acidic the longer a person goes without eating, though resting slows this fall in pH. As an alkali, it also has the function of neutralizing excess stomach acid before it enters the duodenum, the first section of the small intestine. Bile salts also act as bactericides, destroying many of the microbes that may be present in the food.
Clinical significance
In the absence of bile, fats become indigestible and are instead excreted in feces, a condition called steatorrhea. Feces lack their characteristic brown color and instead are white or gray, and greasy. Steatorrhea can lead to deficiencies in essential fatty acids and fat-soluble vitamins. In addition, past the small intestine (which is normally responsible for absorbing fat from food) the gastrointestinal tract and gut flora are not adapted to processing fats, leading to problems in the large intestine.
The cholesterol contained in bile will occasionally accrete into lumps in the gallbladder, forming gallstones. Cholesterol gallstones are generally treated through surgical removal of the gallbladder. However, they can sometimes be dissolved by increasing the concentration of certain naturally occurring bile acids, such as chenodeoxycholic acid and ursodeoxycholic acid.
On an empty stomach – after repeated vomiting, for example – a person's vomit may be green or dark yellow, and very bitter. The bitter and greenish component may be bile or normal digestive juices originating in the stomach. Bile may be forced into the stomach secondary to a weakened valve (pylorus), the presence of certain drugs including alcohol, or powerful muscular contractions and duodenal spasms. This is known as biliary reflux.
Obstruction
Biliary obstruction refers to a condition when bile ducts which deliver bile from the gallbladder or liver to the duodenum become obstructed. The blockage of bile might cause a buildup of bilirubin in the bloodstream which can result in jaundice. There are several potential causes for biliary obstruction including gallstones, cancer, trauma, choledochal cysts, or other benign causes of bile duct narrowing. The most common cause of bile duct obstruction is when gallstone(s) are dislodged from the gallbladder into the cystic duct or common bile duct resulting in a blockage. A blockage of the gallbladder or cystic duct may cause cholecystitis. If the blockage is beyond the confluence of the pancreatic duct, this may cause gallstone pancreatitis. In some instances of biliary obstruction, the bile may become infected by bacteria resulting in ascending cholangitis.
Society and culture
In medical theories prevalent in the West from classical antiquity to the Middle Ages, the body's health depended on the equilibrium of four "humors", or vital fluids, two of which related to bile: blood, phlegm, "yellow bile" (choler), and "black bile". These "humors" are believed to have their roots in the appearance of a blood sedimentation test made in open air, which exhibits a dark clot at the bottom ("black bile"), a layer of unclotted erythrocytes ("blood"), a layer of white blood cells ("phlegm") and a layer of clear yellow serum ("yellow bile").
Excesses of black bile and yellow bile were thought to produce depression and aggression, respectively, and the Greek names for them gave rise to the English words cholera (from Greek χολή kholē, "bile") and melancholia. In the former of those senses, the same theories explain the derivation of the English word bilious from bile, the meaning of gall in English as "exasperation" or "impudence", and the Latin word cholera, derived from the Greek kholé, which was passed along into some Romance languages as words connoting anger, such as colère (French) and cólera (Spanish).
Soap
Soap can be mixed with bile from mammals, such as ox gall. This mixture, called bile soap or gall soap, can be applied to textiles a few hours before washing as a traditional and effective method for removing various kinds of tough stains.
Food
Pinapaitan is a dish in Philippine cuisine that uses bile as flavoring. Other areas where bile is commonly used as a cooking ingredient include Laos and northern parts of Thailand.
During the Boshin War, Satsuma soldiers of the early Imperial Japanese Army reportedly ate human livers boiled in bile. Eating a slain enemy's liver was a tradition of the Satsuma people.
Bears
In regions where bile products are a popular ingredient in traditional medicine, the use of bears in bile-farming has been widespread. This practice has been condemned by activists, and some pharmaceutical companies have developed synthetic (non-ursine) alternatives.
See also
Bile acid sequestrant
Enterohepatic circulation
Intestinal juice
References
Further reading
Seleem HM, Nada AS, Naguib MA, Abdelmaksoud OR, El-Gazzarah AR (2021). Serum immunoglobulin G4 in patients with nonmalignant common bile duct stricture. Menoufia Med J; 34:1275-83.
Body fluids
Digestive system
Biomolecules
Hepatology | Bile | Chemistry,Biology | 1,784 |
56,842,208 | https://en.wikipedia.org/wiki/Fredrick%20Christian%20Sorensen%20House | The Fredrick Christian Sorensen House, on E. Center St. in Ephraim, Utah, was built in c.1870. It was listed on the National Register of Historic Places in 1980.
It is an adobe one-and-a-half-story pair-house with its front facade including one window into each of its side rooms, and window-door-window into its center room. The house is long.
References
Pair-houses
Houses on the National Register of Historic Places in Utah
Houses completed in 1870
Sanpete County, Utah | Fredrick Christian Sorensen House | Engineering | 113 |
41,346,435 | https://en.wikipedia.org/wiki/C9H6O6 | The molecular formula C9H6O6 may refer to:
Hemimellitic acid (benzene-1,2,3-tricarboxylic acid)
Trimellitic acid (benzene-1,2,4-tricarboxylic acid)
Trimesic acid (benzene-1,3,5-tricarboxylic acid)
Molecular formulas | C9H6O6 | Physics,Chemistry | 95 |
11,569,301 | https://en.wikipedia.org/wiki/Septobasidium%20bogoriense | Septobasidium bogoriense is a plant pathogen, one of a number of fungi in the genus Septobasidium responsible for the disease of tea plants known commonly as "velvet blight".
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tea diseases
Pucciniomycotina
Taxa named by Narcisse Théophile Patouillard
Fungi described in 1899
Fungus species | Septobasidium bogoriense | Biology | 87 |
14,892,880 | https://en.wikipedia.org/wiki/Equinalysis | Equinalysis is a computer software program designed to capture and analyse equine locomotion by visually tracking and quantifying biomechanical data. The system was developed in 2004 by consultant farrier, Haydn Price with the intent of allowing veterinarians, farriers, horse trainers and physiotherapists to highlight subtle changes in a horse's locomotion and provide a video record of how a horse's movements change during the course of its working life. This then allows the user to improve the horse's performance with various techniques and treatment plans, such as appropriate shoeing regimes.
Operation
For the analysis, polystyrene markers are placed at specific points on the horse's limbs, mainly over the joints. Then the horse is walked and trotted in-hand, and filmed with a video camera from all angles on a hard, flat surface. The information is then collated and downloaded onto a CD or DVD, which is analyzed on a computer by an accredited individual. The specialist software program records the movement of the markers and produces data that can be used to quantify stride length, body symmetry, joint flexion and extension, and soundness. The resulting baseline of facts, presented in a hard-copy portfolio for future reference, provides the horse owner with a valuable 'baseline measurement' of movement and soundness.
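As a hedged sketch of the kind of computation such marker-tracking software performs (the marker names and coordinates here are illustrative assumptions, not Equinalysis internals), a joint flexion angle can be estimated from three tracked marker positions in a video frame:

import math

def joint_angle(a, b, c):
    """Return the angle in degrees at joint marker b, formed by markers a-b-c
    (each given as an (x, y) position in image coordinates)."""
    v1 = (a[0] - b[0], a[1] - b[1])   # vector from joint to upper marker
    v2 = (c[0] - b[0], c[1] - b[1])   # vector from joint to lower marker
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical hip, stifle, and hock marker positions in one frame:
print(round(joint_angle((310, 120), (335, 210), (300, 295)), 1))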
Reliability
A 2011 study published in the Journal of Equine Veterinary Science found the system did not produce repeatable data from day to day, and was therefore not sufficiently reliable for use in clinical evaluations of equine lameness.
See also
Skeletal system of the horse
References
Horse anatomy
Animal physiology | Equinalysis | Biology | 345 |
51,486,243 | https://en.wikipedia.org/wiki/Stone%20Village%20Historic%20District | The Stone Village Historic District encompasses a distinctive collection of stone buildings on Vermont Route 103 in Chester, Vermont, United States. Dating to the first half of the 19th century is a remarkable concentration of buildings constructed in a regionally distinctive snecked ashlar technique brought to the area by Scottish masons. The district was listed on the National Register of Historic Places in 1974.
Description and history
In the early 1830s, skilled masons from Scotland came to central Vermont to work on building projects there. A number of these, mainly from the Aberdeen area, were experienced in snecked ashlar construction, in which plates of stone are affixed to a rubblestone wall. This type of construction is generally rare in the United States, and is found on about 50 surviving buildings in the state of Vermont. The highest concentration of them is on the north side of Chester Depot village, lining Vermont Route 103, and is known locally as the Stone Village.
Two Scottish masons, brothers Alison and Wiley Clark, came to the town of Chester in 1832 to work on a large factory building (no longer standing). In 1834, Doctor Ptolmey Edson hired the brothers to build his house, which was the first snecked ashlar structure in the village. It was followed by a series of other buildings, most of which are residences. The church and district school were also built of stone, possibly due to the influence of Dr. Edson, who sat on their respective building committees. Most of the houses are either Cape-style houses of one and a half stories or two-story structures, in either case with some Greek Revival styling in the trim details. Thirteen of the seventeen buildings in the district are stone; the other four date to a similar time period (roughly 1830–50). One building, a large wood-frame tavern house at the northern end of the district, was destroyed by fire in 2012.
See also
National Register of Historic Places listings in Windsor County, Vermont
References
Historic districts on the National Register of Historic Places in Vermont
National Register of Historic Places in Windsor County, Vermont
Greek Revival architecture in Vermont
Historic districts in Windsor County, Vermont
Chester, Vermont
Scottish-American culture in Vermont
Stonemasonry | Stone Village Historic District | Engineering | 443 |
63,755,186 | https://en.wikipedia.org/wiki/Bicycle%20counter | Bicycle counters are electronic devices that detect the number of bicycles passing a location during a certain period of time. Some advanced counters can also detect the speed, direction, and type of bicycles. These systems are sometimes referred to as bicycle barometers, but that term is misleading because a barometer measures pressure. Most counting stations consist only of sensors and an internal computing device, although some add a display showing the total number of cyclists for the day and the current year. There are counting stations in hundreds of cities around the world, for example in Manchester, Zagreb, and Portland. The first bicycle counting station was installed in Odense, Denmark, in 2002.
Persuasive aspects
Bicycle counters are mainly installed to provide city planners with reliable data on the development of bicycle usage. Bicycle counting stations are said to raise awareness of cycling as a mode of transportation, encourage more people to use their bicycles, and give cyclists acknowledgement. There has been no representative study on the impact of bicycle counters on citizens or passers-by, but there are early empirical clues that urban visualizations can "become appropriate communication media for sharing, discussing, and co-producing socially relevant data".
To increase visibility, bicycle counters are mostly installed at positions with high traffic volume and visibility to a range of road users.
They have been called urban visualizations and fulfill certain criteria of ambient intelligence, such as being embedded, context-aware and adaptive. Bicycle counting stations can be described as persuasive technology.
"Through sensing technology, a display can act as a tool that increases the capability to capture a behavior (e.g., measuring residential energy consumption, bicycle use, etc.); through its visual imagery, it can function as a medium that provides useful information, such as behavioral statistics or cause-and-effect relationships; and through its networking ability, it can become a social actor, encouraging community-based feedback and social interaction".
Technical setup
Different techniques are used for the detection of bicycles, such as built-in induction loops, piezoelectric strips, pneumatic hoses, infrared sensors or cameras. Different setups offer different advantages, such as more precise counting, longer battery life, lower costs, or the ability to differentiate between road users such as cyclists, pedestrians and cars. Independent testing has shown that pneumatic tubes can record with over 95% accuracy and piezoelectric sensors reach 99% accuracy. Manufacturers state a 90% precision for induction loops.
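As a hedged sketch (the loop spacing and timing threshold are assumptions for illustration, not any particular vendor's firmware), a station with two induction loops a short distance apart can derive count, direction, and speed from the order and spacing of the loop pulses:

LOOP_SPACING_M = 0.5      # assumed distance between the two loops, in metres
MAX_CROSSING_S = 2.0      # pulses further apart than this are separate events

def classify_crossing(t_loop_a, t_loop_b):
    """Given trigger times (in seconds) of loops A and B for one crossing,
    return (direction, speed in km/h), or None if not a single crossing."""
    dt = t_loop_b - t_loop_a
    if dt == 0 or abs(dt) > MAX_CROSSING_S:
        return None
    direction = "A->B" if dt > 0 else "B->A"
    speed_kmh = (LOOP_SPACING_M / abs(dt)) * 3.6
    return direction, round(speed_kmh, 1)

print(classify_crossing(0.00, 0.12))  # ('A->B', 15.0)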
Data
Unlike manual counting or other bicycle related interventions or citizen science, where citizens manually put in data, bicycle counting stations automatically generate citizen related data. Automatic counting systems are said to be cheaper than manual counting by people. Because of the use of communication technology in the urban context, bicycle counters can be counted as smart city technology, urban informatics or urban computing. Most of the organizations who install bicycle counters, provide the number of cyclists as open data.
Criticism
There has been criticism of the precision of the counting and of the cost of bicycle counters (€14,000–31,000) as a waste of tax money.
See also
Several cities, such as Bonn and Lahti, have publicly celebrated cyclists whose crossing brought the count to a round number (such as the 100,000th rider).
Cycling barometer is also the name of a ranking by the European Cyclists' Federation for the most bicycle-friendly nations in the EU.
There has been creative use of the data generated by counting stations, such as an information design poster which includes number of daily cyclists, precipitation and temperature.
Gallery
References
Cycling infrastructure
Road traffic management
Bicycle transportation planning
Road transport
Counting instruments | Bicycle counter | Mathematics,Technology,Engineering | 719 |
27,161,748 | https://en.wikipedia.org/wiki/Ground%20field | In mathematics, a ground field is a field K fixed at the beginning of the discussion.
Use
It is used in various areas of algebra:
In linear algebra
In linear algebra, the concept of a vector space may be developed over any field.
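As a standard illustration (added here; it is not in the original article), the same set can carry very different vector-space structures depending on the ground field: the complex numbers C form a 1-dimensional vector space over the ground field C, a 2-dimensional one over R (with basis {1, i}), and an infinite-dimensional one over Q.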
In algebraic geometry
In algebraic geometry, in the foundational developments of André Weil the use of fields other than the complex numbers was essential to expand the definitions to include the idea of abstract algebraic variety over K, and generic point relative to K.
In Lie theory
Reference to a ground field may be common in the theory of Lie algebras (qua vector spaces) and algebraic groups (qua algebraic varieties).
In Galois theory
In Galois theory, given a field extension L/K, the field K that is being extended may be considered the ground field for an argument or discussion. Within algebraic geometry, from the point of view of scheme theory, the spectrum Spec(K) of the ground field K plays the role of final object in the category of K-schemes, and its structure and symmetry may be richer than the fact that the space of the scheme is a point might suggest.
In Diophantine geometry
In diophantine geometry the characteristic problems of the subject are those caused by the fact that the ground field K is not taken to be algebraically closed. The field of definition of a variety given abstractly may be smaller than the ground field, and two varieties may become isomorphic when the ground field is enlarged, a major topic in Galois cohomology.
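A classic illustration (added here as an example): over the ground field R, the circle x² + y² = 1 and the hyperbola x² − y² = 1 are non-isomorphic as real varieties (one has compact real points, the other does not), but over C the change of coordinates y ↦ iy turns one equation into the other, so the two varieties become isomorphic once the ground field is enlarged.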
Notes
Field (mathematics) | Ground field | Mathematics | 315 |
620,529 | https://en.wikipedia.org/wiki/Grader | A grader, also commonly referred to as a road grader, motor grader, or simply blade, is a form of heavy equipment with a long blade used to create a flat surface during grading. Although the earliest models were towed behind horses, and later tractors, most modern graders are self-propelled and thus technically "motor graders".
Typical graders have three axles, with the steering wheels in front, followed by the grading blade or mouldboard, then a cab and engine atop tandem rear axles. Some graders also have front-wheel drives for improved performance. Some graders have optional rear attachments, such as a ripper, scarifier, or compactor. A blade forward of the front axle may also be added. For snowplowing and some dirt grading operations, a main blade extension can also be mounted.
Capacities range from a blade width of 2.50 to 7.30 m (8 to 24 ft) and engines from 93–373 kW (125–500 hp). Certain graders can operate multiple attachments, or be designed for specialized tasks like underground mining.
Function
In civil engineering "rough grading" is performed by heavy equipment such as wheel tractor-scrapers and bulldozers. Graders are used to "finish grade", with the angle, tilt (or pitch), and height of their blade capable of being adjusted to a high level of precision.
Graders are commonly used in the construction and maintenance of dirt and gravel roads. In constructing paved roads, they prepare a wide flat base course for the final road surface. Graders are also used to set native soil or gravel foundation pads to finish grade before the construction of large buildings. Graders can produce canted surfaces for drainage or safety. They may be used to produce drainage ditches with shallow V-shaped cross-sections on either side of highways.
Steering is performed via a steering wheel, or a joystick capable of controlling both the angle and cant of the front wheels. Many models also allow frame articulation between the front and rear axles, which allows a smaller turning radius in addition to allowing the operator to adjust the articulation angle to aid in the efficiency of moving material. Other implement functions are typically hydraulically powered and can be directly controlled by levers, or by joystick inputs or electronic switches controlling electrohydraulic servo valves.
Graders are also outfitted with modern digital grade control technologies, such as those manufactured by Topcon Positioning Systems, Inc., Trimble Navigation, Leica Geosystems, or Mikrofyn. These may combine both laser and GPS guidance to establish precise grade control and (potentially) "stakeless" construction. Manufacturers such as John Deere have also begun to integrate these technologies during construction.
History
Early graders were drawn by humans and draft animals. The Fresno Scraper is a machine pulled by horses used for constructing canals and ditches in sandy soil. The design of the Fresno Scraper forms the basis of most modern earthmoving scrapers, having the ability to scrape and move a quantity of soil, and also to discharge it at a controlled depth, thus quadrupling the volume which could be handled manually. The Fresno scraper was invented in 1883 by James Porteous. Working with farmers in Fresno, California, he had recognised the dependence of the Central San Joaquin Valley on irrigation, and the need for a more efficient means of constructing canals and ditches in the sandy soil. In perfecting the design of his machine, Porteous made several revisions on his own and also traded ideas with William Deidrick, Frank Dusy, and Abijah McCall, who invented and held patents on similar scrapers.
The era of motorization by traction engines, steam tractors, motor trucks, and tractors saw such towed graders grow in size and productivity. The first self-propelled grader was made in 1920 by the Russell Grader Manufacturing Company, which called it the Russell Motor Hi-Way Patrol. These early graders were created by adding the grader blade as an attachment to a generalist tractor unit. After purchasing the company in 1928, Caterpillar went on to truly integrate the tractor and grader into one design—at the same time replacing crawler tracks with wheels to yield the first rubber-tire self-propelled grader, the Caterpillar Auto Patrol, released in 1931.
Regional uses
In addition to their use in road construction, graders may also be used to perform roughly equivalent work.
In some locales such as Northern Europe, Canada, and places in the United States, graders are often used in municipal and residential snow removal. In scrubland and grassland areas of Australia and Africa, graders are often an essential piece of equipment on ranches, large farms, and plantations to make dirt tracks where the absence of rocks and trees means bulldozers are not required.
Manufacturers
Case Construction Equipment
Caterpillar Inc.
Deere & Company
Galion Iron Works
HEPCO
Komatsu Limited
LiuGong Construction Machinery, LLC.
Mitsubishi Heavy Industries
New Holland Construction
Sany
SDLG
Terex Corporation
Volvo
XCMG
See also
King road drag
Land grading
References
External links
Video with technical development from graders
photos/videos from different types of grader works
A Road-Scraper That Cuts Through Snow, Popular Science monthly, February 1919, page 26, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PA26
http://www.wisegeek.com/what-are-road-graders.htm
Construction equipment
Engineering vehicles
Heavy equipment
Road construction
Snow removal
American inventions | Grader | Engineering | 1,136 |
71,918,023 | https://en.wikipedia.org/wiki/Futibatinib | Futibatinib, sold under the brand name Lytgobi, is an anti-cancer medication used for the treatment of cholangiocarcinoma (bile duct cancer). It is a kinase inhibitor. It is taken by mouth.
Futibatinib was approved for medical use in the United States in September 2022, in Japan in June 2023 and in the European Union in July 2023.
Medical uses
Futibatinib is indicated for the treatment of adults with previously treated, unresectable, locally advanced or metastatic intrahepatic cholangiocarcinoma harboring fibroblast growth factor receptor 2 (FGFR2) gene fusions or other rearrangements.
Society and culture
Legal status
On 26 April 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a conditional marketing authorization for the medicinal product Lytgobi, intended for the second-line treatment of locally advanced or metastatic cholangiocarcinoma characterized by fusion or rearrangements of fibroblast growth factor receptor (FGFR) 2. The applicant for this medicinal product is Taiho Pharma Netherlands B.V. Futibatinib was approved for medical use in the European Union in July 2023.
Names
Futibatinib is the international nonproprietary name (INN).
References
Antineoplastic drugs
Pyrazolopyrimidines
Methoxy compounds
Amines
Orphan drugs | Futibatinib | Chemistry | 326 |
35,306,944 | https://en.wikipedia.org/wiki/Explosive%20gas%20leak%20detector | An explosive gas leak detector is a device used to detect explosive gas leaks in enclosed spaces. Typically, a local alarm will be triggered, and optionally a remote alarm may also be connected.
Application
Carbon monoxide detectors will not detect explosive mixtures; thus the device is often recommended to complement the CO detector. Combination explosive gas leak and carbon monoxide detectors exist.
Placement
A detector for propane is best placed down low, near the floor, as propane is heavier than air. A detector for natural gas (city gas) is best placed up high, near the ceiling, as natural gas is lighter than air. Some detectors can detect both natural gas and propane, but this requires a compromise location.
References
Detectors
Active fire protection
Fire detection and alarm
Gas sensors
Natural gas safety
Safety equipment | Explosive gas leak detector | Chemistry | 151 |
9,028,799 | https://en.wikipedia.org/wiki/Bacteria | Bacteria (singular: bacterium) are ubiquitous, mostly free-living organisms often consisting of one biological cell. They constitute a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit the air, soil, water, acidic hot springs, radioactive waste, and the deep biosphere of Earth's crust. Bacteria play a vital role in many stages of the nutrient cycle by recycling nutrients and the fixation of nitrogen from the atmosphere. The nutrient cycle includes the decomposition of dead bodies; bacteria are responsible for the putrefaction stage in this process. In the biological communities surrounding hydrothermal vents and cold seeps, extremophile bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. Bacteria also live in mutualistic, commensal and parasitic relationships with plants and animals. Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology.
Like all animals, humans carry vast numbers (approximately 10¹³ to 10¹⁴) of bacteria. Most are in the gut, though there are many on the skin. Most of the bacteria in and on the body are harmless or rendered so by the protective effects of the immune system, and many are beneficial, particularly the ones in the gut. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, tuberculosis, tetanus and bubonic plague. The most common fatal bacterial diseases are respiratory infections. Antibiotics are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem. Bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, the recovery of gold, palladium, copper and other metals in the mining sector (biomining, bioleaching), as well as in biotechnology, and the manufacture of antibiotics and other chemicals.
Once regarded as plants constituting the class Schizomycetes ("fission fungi"), bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes, bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that prokaryotes consist of two very different groups of organisms that evolved from an ancient common ancestor. These evolutionary domains are called Bacteria and Archaea.
Etymology
The word bacteria is the plural of the Neo-Latin bacterium, which is the romanisation of the Ancient Greek βακτήριον (baktērion), the diminutive of βακτηρία (baktēria), meaning "staff, cane", because the first ones to be discovered were rod-shaped.
Origin and early evolution
The ancestors of bacteria were unicellular microorganisms that were the first forms of life to appear on Earth, about 4 billion years ago. For about 3 billion years, most organisms were microscopic, and bacteria and archaea were the dominant forms of life. Although bacterial fossils exist, such as stromatolites, their lack of distinctive morphology prevents them from being used to examine the history of bacterial evolution, or to date the time of origin of a particular bacterial species. However, gene sequences can be used to reconstruct the bacterial phylogeny, and these studies indicate that bacteria diverged first from the archaeal/eukaryotic lineage. The most recent common ancestor (MRCA) of bacteria and archaea was probably a hyperthermophile that lived about 2.5 billion–3.2 billion years ago. The earliest life on land may have been bacteria some 3.22 billion years ago.
Bacteria were also involved in the second great evolutionary divergence, that of the archaea and eukaryotes. Here, eukaryotes resulted from the entering of ancient bacteria into endosymbiotic associations with the ancestors of eukaryotic cells, which were themselves possibly related to the Archaea. This involved the engulfment by proto-eukaryotic cells of alphaproteobacterial symbionts to form either mitochondria or hydrogenosomes, which are still found in all known Eukarya (sometimes in highly reduced form, e.g. in ancient "amitochondrial" protozoa). Later, some eukaryotes that already contained mitochondria also engulfed cyanobacteria-like organisms, leading to the formation of chloroplasts in algae and plants. This is known as primary endosymbiosis.
Habitat
Bacteria are ubiquitous, living in every possible habitat on the planet including soil, underwater, deep in Earth's crust and even such extreme environments as acidic hot springs and radioactive waste. There are thought to be approximately 2×10³⁰ bacteria on Earth, forming a biomass that is only exceeded by plants. They are abundant in lakes and oceans, in arctic ice, and geothermal springs where they provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. They live on and in plants and animals. Most do not cause diseases, are beneficial to their environments, and are essential for life. The soil is a rich source of bacteria and a few grams contain around a thousand million of them. They are all essential to soil ecology, breaking down toxic waste and recycling nutrients. They are even found in the atmosphere and one cubic metre of air holds around one hundred million bacterial cells. The oceans and seas harbour around 3×10²⁶ bacteria which provide up to 50% of the oxygen humans breathe. Only around 2% of bacterial species have been fully studied.
Morphology
Size. Bacteria display a wide diversity of shapes and sizes. Bacterial cells are about one-tenth the size of eukaryotic cells and are typically 0.5–5.0 micrometres in length. However, a few species are visible to the unaided eye—for example, Thiomargarita namibiensis is up to half a millimetre long, Epulopiscium fishelsoni reaches 0.7 mm, and Thiomargarita magnifica can reach even 2 cm in length, which is 50 times larger than other known bacteria. Among the smallest bacteria are members of the genus Mycoplasma, which measure only 0.3 micrometres, as small as the largest viruses. Some bacteria may be even smaller, but these ultramicrobacteria are not well-studied.
Shape. Most bacterial species are either spherical, called cocci (singular coccus, from Greek kókkos, grain, seed), or rod-shaped, called bacilli (sing. bacillus, from Latin baculus, stick). Some bacteria, called vibrio, are shaped like slightly curved rods or comma-shaped; others can be spiral-shaped, called spirilla, or tightly coiled, called spirochaetes. A small number of other unusual shapes have been described, such as star-shaped bacteria. This wide variety of shapes is determined by the bacterial cell wall and cytoskeleton and is important because it can influence the ability of bacteria to acquire nutrients, attach to surfaces, swim through liquids and escape predators.
Multicellularity. Most bacterial species exist as single cells; others associate in characteristic patterns: Neisseria forms diploids (pairs), streptococci form chains, and staphylococci group together in "bunch of grapes" clusters. Bacteria can also group to form larger multicellular structures, such as the elongated filaments of Actinomycetota species, the aggregates of Myxobacteria species, and the complex hyphae of Streptomyces species. These multicellular structures are often only seen in certain conditions. For example, when starved of amino acids, myxobacteria detect surrounding cells in a process known as quorum sensing, migrate towards each other, and aggregate to form fruiting bodies up to 500 micrometres long and containing approximately 100,000 bacterial cells. In these fruiting bodies, the bacteria perform separate tasks; for example, about one in ten cells migrate to the top of a fruiting body and differentiate into a specialised dormant state called a myxospore, which is more resistant to drying and other adverse environmental conditions.
Biofilms. Bacteria often attach to surfaces and form dense aggregations called biofilms and larger formations known as microbial mats. These biofilms and mats can range from a few micrometres in thickness to up to half a metre in depth, and may contain multiple species of bacteria, protists and archaea. Bacteria living in biofilms display a complex arrangement of cells and extracellular components, forming secondary structures, such as microcolonies, through which there are networks of channels to enable better diffusion of nutrients. In natural environments, such as soil or the surfaces of plants, the majority of bacteria are bound to surfaces in biofilms. Biofilms are also important in medicine, as these structures are often present during chronic bacterial infections or in infections of implanted medical devices, and bacteria protected within biofilms are much harder to kill than individual isolated bacteria.
Cellular structure
Intracellular structures
The bacterial cell is surrounded by a cell membrane, which is made primarily of phospholipids. This membrane encloses the contents of the cell and acts as a barrier to hold nutrients, proteins and other essential components of the cytoplasm within the cell. Unlike eukaryotic cells, bacteria usually lack large membrane-bound structures in their cytoplasm such as a nucleus, mitochondria, chloroplasts and the other organelles present in eukaryotic cells. However, some bacteria have protein-bound organelles in the cytoplasm which compartmentalise aspects of bacterial metabolism, such as the carboxysome. Additionally, bacteria have a multi-component cytoskeleton to control the localisation of proteins and nucleic acids within the cell, and to manage the process of cell division.
Many important biochemical reactions, such as energy generation, occur due to concentration gradients across membranes, creating a potential difference analogous to a battery. The general lack of internal membranes in bacteria means these reactions, such as electron transport, occur across the cell membrane between the cytoplasm and the outside of the cell or periplasm. However, in many photosynthetic bacteria, the plasma membrane is highly folded and fills most of the cell with layers of light-gathering membrane. These light-gathering complexes may even form lipid-enclosed structures called chlorosomes in green sulfur bacteria.
Bacteria do not have a membrane-bound nucleus, and their genetic material is typically a single circular bacterial chromosome of DNA located in the cytoplasm in an irregularly shaped body called the nucleoid. The nucleoid contains the chromosome with its associated proteins and RNA. Like all other organisms, bacteria contain ribosomes for the production of proteins, but the structure of the bacterial ribosome is different from that of eukaryotes and archaea.
Some bacteria produce intracellular nutrient storage granules, such as glycogen, polyphosphate, sulfur or polyhydroxyalkanoates. Bacteria such as the photosynthetic cyanobacteria, produce internal gas vacuoles, which they use to regulate their buoyancy, allowing them to move up or down into water layers with different light intensities and nutrient levels.
Extracellular structures
Around the outside of the cell membrane is the cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, and the antibiotic penicillin (produced by a fungus called Penicillium) is able to kill bacteria by inhibiting a step in the synthesis of peptidoglycan.
There are broadly speaking two different types of cell wall in bacteria, that classify bacteria into Gram-positive bacteria and Gram-negative bacteria. The names originate from the reaction of cells to the Gram stain, a long-standing test for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids. In contrast, Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the Gram-negative cell wall, and only members of the Bacillota group and actinomycetota (previously known as the low G+C and high G+C Gram-positive bacteria, respectively) have the alternative Gram-positive arrangement. These differences in structure can produce differences in antibiotic susceptibility; for instance, vancomycin can kill only Gram-positive bacteria and is ineffective against Gram-negative pathogens, such as Haemophilus influenzae or Pseudomonas aeruginosa. Some bacteria have cell wall structures that are neither classically Gram-positive nor Gram-negative. This includes clinically important bacteria such as mycobacteria which have a thick peptidoglycan cell wall like a Gram-positive bacterium, but also a second outer layer of lipids.
In many bacteria, an S-layer of rigidly arrayed protein molecules covers the outside of the cell. This layer provides chemical and physical protection for the cell surface and can act as a macromolecular diffusion barrier. S-layers have diverse functions and are known to act as virulence factors in Campylobacter species and contain surface enzymes in Bacillus stearothermophilus.
Flagella are rigid protein structures, about 20 nanometres in diameter and up to 20 micrometres in length, that are used for motility. Flagella are driven by the energy released by the transfer of ions down an electrochemical gradient across the cell membrane.
Fimbriae (sometimes called "attachment pili") are fine filaments of protein, usually 2–10 nanometres in diameter and up to several micrometres in length. They are distributed over the surface of the cell, and resemble fine hairs when seen under the electron microscope. Fimbriae are believed to be involved in attachment to solid surfaces or to other cells, and are essential for the virulence of some bacterial pathogens. Pili (sing. pilus) are cellular appendages, slightly larger than fimbriae, that can transfer genetic material between bacterial cells in a process called conjugation where they are called conjugation pili or sex pili (see bacterial genetics, below). They can also generate movement where they are called type IV pili.
Glycocalyx is produced by many bacteria to surround their cells, and varies in structural complexity: ranging from a disorganised slime layer of extracellular polymeric substances to a highly structured capsule. These structures can protect cells from engulfment by eukaryotic cells such as macrophages (part of the human immune system). They can also act as antigens and be involved in cell recognition, as well as aiding attachment to surfaces and the formation of biofilms.
The assembly of these extracellular structures is dependent on bacterial secretion systems. These transfer proteins from the cytoplasm into the periplasm or into the environment around the cell. Many types of secretion systems are known and these structures are often essential for the virulence of pathogens, so are intensively studied.
Endospores
Some genera of Gram-positive bacteria, such as Bacillus, Clostridium, Sporohalobacter, Anaerobacter, and Heliobacterium, can form highly resistant, dormant structures called endospores. Endospores develop within the cytoplasm of the cell; generally, a single endospore develops in each cell. Each endospore contains a core of DNA and ribosomes surrounded by a cortex layer and protected by a multilayer rigid coat composed of peptidoglycan and a variety of proteins.
Endospores show no detectable metabolism and can survive extreme physical and chemical stresses, such as high levels of UV light, gamma radiation, detergents, disinfectants, heat, freezing, pressure, and desiccation. In this dormant state, these organisms may remain viable for millions of years. Endospores even allow bacteria to survive exposure to the vacuum and radiation of outer space, leading to the possibility that bacteria could be distributed throughout the universe by space dust, meteoroids, asteroids, comets, planetoids, or directed panspermia.
Endospore-forming bacteria can cause disease; for example, anthrax can be contracted by the inhalation of Bacillus anthracis endospores, and contamination of deep puncture wounds with Clostridium tetani endospores causes tetanus, which, like botulism, is caused by a toxin released by the bacteria that grow from the spores. Clostridioides difficile infection, a common problem in healthcare settings, is caused by spore-forming bacteria.
Metabolism
Bacteria exhibit an extremely wide variety of metabolic types. The distribution of metabolic traits within a group of bacteria has traditionally been used to define their taxonomy, but these traits often do not correspond with modern genetic classifications. Bacterial metabolism is classified into nutritional groups on the basis of three major criteria: the source of energy, the electron donors used, and the source of carbon used for growth.
Phototrophic bacteria derive energy from light using photosynthesis, while chemotrophic bacteria break down chemical compounds through oxidation, driving metabolism by transferring electrons from a given electron donor to a terminal electron acceptor in a redox reaction. Chemotrophs are further divided by the types of compounds they use to transfer electrons. Bacteria that derive electrons from inorganic compounds such as hydrogen, carbon monoxide, or ammonia are called lithotrophs, while those that use organic compounds are called organotrophs. More specifically, aerobic organisms use oxygen as the terminal electron acceptor, while anaerobic organisms use other compounds such as nitrate, sulfate, or carbon dioxide.
Many bacteria, called heterotrophs, derive their carbon from other organic carbon. Others, such as cyanobacteria and some purple bacteria, are autotrophic, meaning they obtain cellular carbon by fixing carbon dioxide. In unusual circumstances, the gas methane can be used by methanotrophic bacteria as both a source of electrons and a substrate for carbon anabolism.
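Those three criteria combine into compound trophic names. A minimal sketch of the naming scheme (the function and the example organisms are illustrative, not taken from the source):

```python
def trophic_name(energy: str, electron_donor: str, carbon: str) -> str:
    """Compose a nutritional-group name from the three criteria above:
    energy source, electron donor, and carbon source."""
    prefix = {"light": "photo", "chemical": "chemo"}[energy]
    donor = {"inorganic": "litho", "organic": "organo"}[electron_donor]
    carbon_part = {"CO2": "auto", "organic": "hetero"}[carbon]
    return prefix + donor + carbon_part + "troph"

# Cyanobacteria: light energy, inorganic electron donor (water), CO2 fixation
print(trophic_name("light", "inorganic", "CO2"))       # photolithoautotroph
# E. coli growing on glucose: chemical energy, organic donor, organic carbon
print(trophic_name("chemical", "organic", "organic"))  # chemoorganoheterotroph
```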
In many ways, bacterial metabolism provides traits that are useful for ecological stability and for human society. For example, diazotrophs have the ability to fix nitrogen gas using the enzyme nitrogenase, a trait found in bacteria of most of the metabolic types listed above. Bacterial metabolism also drives other ecologically important processes, such as denitrification, sulfate reduction, and acetogenesis. Bacterial metabolic processes are important drivers in biological responses to pollution; for example, sulfate-reducing bacteria are largely responsible for the production of the highly toxic forms of mercury (methyl- and dimethylmercury) in the environment. Nonrespiratory anaerobes use fermentation to generate energy and reducing power, secreting metabolic by-products (such as ethanol in brewing) as waste. Facultative anaerobes can switch between fermentation and different terminal electron acceptors depending on the environmental conditions in which they find themselves.
Reproduction and growth
Unlike in multicellular organisms, increases in cell size (cell growth) and reproduction by cell division are tightly linked in unicellular organisms. Bacteria grow to a fixed size and then reproduce through binary fission, a form of asexual reproduction. Under optimal conditions, bacteria can grow and divide extremely rapidly, and some bacterial populations can double as quickly as every 17 minutes. In cell division, two identical clone daughter cells are produced. Some bacteria, while still reproducing asexually, form more complex reproductive structures that help disperse the newly formed daughter cells. Examples include fruiting body formation by myxobacteria, aerial hyphae formation by Streptomyces species, and budding, in which a cell forms a protrusion that breaks away and produces a daughter cell.
In the laboratory, bacteria are usually grown using solid or liquid media. Solid growth media, such as agar plates, are used to isolate pure cultures of a bacterial strain. However, liquid growth media are used when the measurement of growth or large volumes of cells are required. Growth in stirred liquid media occurs as an even cell suspension, making the cultures easy to divide and transfer, although isolating single bacteria from liquid media is difficult. The use of selective media (media with specific nutrients added or deficient, or with antibiotics added) can help identify specific organisms.
Most laboratory techniques for growing bacteria use high levels of nutrients to produce large amounts of cells cheaply and quickly. However, in natural environments, nutrients are limited, meaning that bacteria cannot continue to reproduce indefinitely. This nutrient limitation has led to the evolution of different growth strategies (see r/K selection theory). Some organisms can grow extremely rapidly when nutrients become available, such as the formation of algal and cyanobacterial blooms that often occur in lakes during the summer. Other organisms have adaptations to harsh environments, such as the production of multiple antibiotics by Streptomyces that inhibit the growth of competing microorganisms. In nature, many organisms live in communities (e.g., biofilms) that may allow for increased supply of nutrients and protection from environmental stresses. These relationships can be essential for growth of a particular organism or group of organisms (syntrophy).
Bacterial growth follows four phases. When a population of bacteria first enter a high-nutrient environment that allows growth, the cells need to adapt to their new environment. The first phase of growth is the lag phase, a period of slow growth when the cells are adapting to the high-nutrient environment and preparing for fast growth. The lag phase has high biosynthesis rates, as proteins necessary for rapid growth are produced. The second phase of growth is the logarithmic phase, also known as the exponential phase. The log phase is marked by rapid exponential growth. The rate at which cells grow during this phase is known as the growth rate (k), and the time it takes the cells to double is known as the generation time (g). During log phase, nutrients are metabolised at maximum speed until one of the nutrients is depleted and starts limiting growth. The third phase of growth is the stationary phase and is caused by depleted nutrients. The cells reduce their metabolic activity and consume non-essential cellular proteins. The stationary phase is a transition from rapid growth to a stress response state and there is increased expression of genes involved in DNA repair, antioxidant metabolism and nutrient transport. The final phase is the death phase where the bacteria run out of nutrients and die.
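The growth rate (k) and generation time (g) defined above are linked by simple exponential-growth arithmetic: N(t) = N0·e^(kt), so g = ln(2)/k. A minimal sketch, with made-up cell counts:

```python
import math

def generation_time(n0: float, n1: float, elapsed_h: float) -> float:
    """Generation (doubling) time g, in hours, from two cell counts
    taken during log phase, assuming purely exponential growth."""
    doublings = math.log2(n1 / n0)
    return elapsed_h / doublings

def growth_rate(g: float) -> float:
    """Specific growth rate k (per hour) from generation time g,
    using g = ln(2) / k."""
    return math.log(2) / g

g = generation_time(1e3, 1e6, 5.0)  # 1,000 -> 1,000,000 cells in 5 h
print(f"g = {g:.2f} h, k = {growth_rate(g):.2f} per hour")
```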
Genetics
Most bacteria have a single circular chromosome that can range in size from only 160,000 base pairs in the endosymbiotic bacterium Carsonella ruddii, to 12,200,000 base pairs (12.2 Mbp) in the soil-dwelling bacterium Sorangium cellulosum. There are many exceptions to this; for example, some Streptomyces and Borrelia species contain a single linear chromosome, while some Vibrio species contain more than one chromosome. Some bacteria contain plasmids, small extra-chromosomal molecules of DNA that may contain genes for various useful functions such as antibiotic resistance, metabolic capabilities, or various virulence factors.
Bacterial genomes usually encode a few hundred to a few thousand genes. The genes in bacterial genomes are usually a single continuous stretch of DNA. Although several different types of introns do exist in bacteria, these are much rarer than in eukaryotes.
Bacteria, as asexual organisms, inherit an identical copy of the parent's genome and are clonal. However, all bacteria can evolve by selection on changes to their genetic material (DNA) caused by genetic recombination or mutations. Mutations arise from errors made during the replication of DNA or from exposure to mutagens. Mutation rates vary widely among different species of bacteria and even among different clones of a single species of bacteria. Genetic changes in bacterial genomes emerge from either random mutation during replication or "stress-directed mutation", where genes involved in a particular growth-limiting process have an increased mutation rate.
Some bacteria transfer genetic material between cells. This can occur in three main ways. First, bacteria can take up exogenous DNA from their environment in a process called transformation. Many bacteria can naturally take up DNA from the environment, while others must be chemically altered in order to induce them to take up DNA. The development of competence in nature is usually associated with stressful environmental conditions and seems to be an adaptation for facilitating repair of DNA damage in recipient cells. Second, bacteriophages can integrate into the bacterial chromosome, introducing foreign DNA in a process known as transduction. Many types of bacteriophage exist; some infect and lyse their host bacteria, while others insert into the bacterial chromosome. Bacteria resist phage infection through restriction modification systems that degrade foreign DNA and a system that uses CRISPR sequences to retain fragments of the genomes of phage that the bacteria have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Third, bacteria can transfer genetic material through direct cell contact via conjugation.
In ordinary circumstances, transduction, conjugation, and transformation involve transfer of DNA between individual bacteria of the same species, but occasionally transfer may occur between individuals of different bacterial species, and this may have significant consequences, such as the transfer of antibiotic resistance. In such cases, gene acquisition from other bacteria or the environment is called horizontal gene transfer and may be common under natural conditions.
Behaviour
Movement
Many bacteria are motile (able to move themselves) and do so using a variety of mechanisms. The best studied of these are flagella, long filaments that are turned by a motor at the base to generate propeller-like movement. The bacterial flagellum is made of about 20 proteins, with approximately another 30 proteins required for its regulation and assembly. The flagellum is a rotating structure driven by a reversible motor at the base that uses the electrochemical gradient across the membrane for power.
Bacteria can use flagella in different ways to generate different kinds of movement. Many bacteria (such as E. coli) have two distinct modes of movement: forward movement (swimming) and tumbling. The tumbling allows them to reorient and makes their movement a three-dimensional random walk. Bacterial species differ in the number and arrangement of flagella on their surface; some have a single flagellum (monotrichous), a flagellum at each end (amphitrichous), clusters of flagella at the poles of the cell (lophotrichous), while others have flagella distributed over the entire surface of the cell (peritrichous). The flagella of a group of bacteria, the spirochaetes, are found between two membranes in the periplasmic space. They have a distinctive helical body that twists about as it moves.
Two other types of bacterial motion are twitching motility, which relies on a structure called the type IV pilus, and gliding motility, which uses other mechanisms. In twitching motility, the rod-like pilus extends out from the cell, binds some substrate, and then retracts, pulling the cell forward.
Motile bacteria are attracted or repelled by certain stimuli in behaviours called taxes: these include chemotaxis, phototaxis, energy taxis, and magnetotaxis. In one peculiar group, the myxobacteria, individual bacteria move together to form waves of cells that then differentiate to form fruiting bodies containing spores. The myxobacteria move only when on solid surfaces, unlike E. coli, which is motile in liquid or solid media.
Several Listeria and Shigella species move inside host cells by usurping the cytoskeleton, which is normally used to move organelles inside the cell. By promoting actin polymerisation at one pole of their cells, they can form a kind of tail that pushes them through the host cell's cytoplasm.
Communication
A few bacteria have chemical systems that generate light. This bioluminescence often occurs in bacteria that live in association with fish, and the light probably serves to attract fish or other large animals.
Bacteria often function as multicellular aggregates known as biofilms, exchanging a variety of molecular signals for intercell communication and engaging in coordinated multicellular behaviour.
The communal benefits of multicellular cooperation include a cellular division of labour, accessing resources that cannot effectively be used by single cells, collectively defending against antagonists, and optimising population survival by differentiating into distinct cell types. For example, bacteria in biofilms can have more than five hundred times increased resistance to antibacterial agents than individual "planktonic" bacteria of the same species.
One type of intercellular communication by a molecular signal is called quorum sensing, which serves the purpose of determining whether the local population density is sufficient to support investment in processes that are only successful if large numbers of similar organisms behave similarly, such as excreting digestive enzymes or emitting light. Quorum sensing enables bacteria to coordinate gene expression and to produce, release, and detect autoinducers or pheromones that accumulate with the growth in cell population.
Classification and identification
Classification seeks to describe the diversity of bacterial species by naming and grouping organisms based on similarities. Bacteria can be classified on the basis of cell structure, cellular metabolism or on differences in cell components, such as DNA, fatty acids, pigments, antigens and quinones. While these schemes allowed the identification and classification of bacterial strains, it was unclear whether these differences represented variation between distinct species or between strains of the same species. This uncertainty was due to the lack of distinctive structures in most bacteria, as well as lateral gene transfer between unrelated species. Due to lateral gene transfer, some closely related bacteria can have very different morphologies and metabolisms. To overcome this uncertainty, modern bacterial classification emphasises molecular systematics, using genetic techniques such as guanine cytosine ratio determination, genome-genome hybridisation, as well as sequencing genes that have not undergone extensive lateral gene transfer, such as the rRNA gene. Classification of bacteria is determined by publication in the International Journal of Systematic Bacteriology, and Bergey's Manual of Systematic Bacteriology. The International Committee on Systematic Bacteriology (ICSB) maintains international rules for the naming of bacteria and taxonomic categories and for the ranking of them in the International Code of Nomenclature of Bacteria.
Historically, bacteria were considered a part of the Plantae, the plant kingdom, and were called "Schizomycetes" (fission-fungi). For this reason, collective bacteria and other microorganisms in a host are often called "flora".
The term "bacteria" was traditionally applied to all microscopic, single-cell prokaryotes. However, molecular systematics showed prokaryotic life to consist of two separate domains, originally called Eubacteria and Archaebacteria, but now called Bacteria and Archaea that evolved independently from an ancient common ancestor. The archaea and eukaryotes are more closely related to each other than either is to the bacteria. These two domains, along with Eukarya, are the basis of the three-domain system, which is currently the most widely used classification system in microbiology. However, due to the relatively recent introduction of molecular systematics and a rapid increase in the number of genome sequences that are available, bacterial classification remains a changing and expanding field. For example, Cavalier-Smith argued that the Archaea and Eukaryotes evolved from Gram-positive bacteria.
The identification of bacteria in the laboratory is particularly relevant in medicine, where the correct treatment is determined by the bacterial species causing an infection. Consequently, the need to identify human pathogens was a major impetus for the development of techniques to identify bacteria.
The Gram stain, developed in 1884 by Hans Christian Gram, characterises bacteria based on the structural characteristics of their cell walls. The thick layers of peptidoglycan in the "Gram-positive" cell wall stain purple, while the thin "Gram-negative" cell wall appears pink. By combining morphology and Gram-staining, most bacteria can be classified as belonging to one of four groups (Gram-positive cocci, Gram-positive bacilli, Gram-negative cocci and Gram-negative bacilli). Some organisms are best identified by stains other than the Gram stain, particularly mycobacteria or Nocardia, which show acid fastness on Ziehl–Neelsen or similar stains. Other organisms may need to be identified by their growth in special media, or by other techniques, such as serology.
Culture techniques are designed to promote the growth and identify particular bacteria while restricting the growth of the other bacteria in the sample. Often these techniques are designed for specific specimens; for example, a sputum sample will be treated to identify organisms that cause pneumonia, while stool specimens are cultured on selective media to identify organisms that cause diarrhea while preventing growth of non-pathogenic bacteria. Specimens that are normally sterile, such as blood, urine or spinal fluid, are cultured under conditions designed to grow all possible organisms. Once a pathogenic organism has been isolated, it can be further characterised by its morphology, growth patterns (such as aerobic or anaerobic growth), patterns of hemolysis, and staining.
As with bacterial classification, identification of bacteria is increasingly using molecular methods and mass spectrometry. Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. Diagnostics using DNA-based tools, such as polymerase chain reaction, are increasingly popular due to their specificity and speed, compared to culture-based methods. These methods also allow the detection and identification of "viable but nonculturable" cells that are metabolically active but non-dividing. However, even using these improved methods, the total number of bacterial species is not known and cannot even be estimated with any certainty. Following present classification, there are a little less than 9,300 known species of prokaryotes, which includes bacteria and archaea; but attempts to estimate the true extent of bacterial diversity have ranged from 10^7 to 10^9 total species, and even these diverse estimates may be off by many orders of magnitude.
Phyla
The following phyla have been validly published according to the Bacteriological Code:
Acidobacteriota
Actinomycetota
Aquificota
Armatimonadota
Atribacterota
Bacillota
Bacteroidota
Balneolota
Bdellovibrionota
Caldisericota
Calditrichota
Campylobacterota
Chlamydiota
Chlorobiota
Chloroflexota
Chrysiogenota
Coprothermobacterota
Cyanobacteriota
Deferribacterota
Deinococcota
Dictyoglomota
Elusimicrobiota
Fibrobacterota
Fusobacteriota
Gemmatimonadota
Ignavibacteriota
Kiritimatiellota
Lentisphaerota
Mycoplasmatota
Myxococcota
Nitrososphaerota
Nitrospinota
Nitrospirota
Planctomycetota
Pseudomonadota
Rhodothermota
Spirochaetota
Synergistota
Thermodesulfobacteriota
Thermomicrobiota
Thermoproteota
Thermotogota
Verrucomicrobiota
Interactions with other organisms
Despite their apparent simplicity, bacteria can form complex associations with other organisms. These symbiotic associations can be divided into parasitism, mutualism and commensalism.
Commensals
The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" and all plants and animals are colonised by commensal bacteria. In humans and other animals, millions of them live on the skin, the airways, the gut and other orifices.
Referred to as "normal flora", or "commensals", these bacteria usually cause no harm but may occasionally invade other sites of the body and cause infection. Escherichia coli is a commensal in the human gut but can cause urinary tract infections. Similarly, streptococci, which are part of the normal flora of the human mouth, can cause heart disease.
Predators
Some species of bacteria kill and then consume other microorganisms; these species are called predatory bacteria. These include organisms such as Myxococcus xanthus, which forms swarms of cells that kill and digest any bacteria they encounter. Other bacterial predators either attach to their prey in order to digest them and absorb nutrients or invade another cell and multiply inside the cytosol. These predatory bacteria are thought to have evolved from saprophages that consumed dead microorganisms, through adaptations that allowed them to entrap and kill other organisms.
Mutualists
Certain bacteria form close spatial associations that are essential for their survival. One such mutualistic association, called interspecies hydrogen transfer, occurs between clusters of anaerobic bacteria that consume organic acids, such as butyric acid or propionic acid, and produce hydrogen, and methanogenic archaea that consume hydrogen. The bacteria in this association are unable to consume the organic acids as this reaction produces hydrogen that accumulates in their surroundings. Only the intimate association with the hydrogen-consuming archaea keeps the hydrogen concentration low enough to allow the bacteria to grow.
In soil, microorganisms that reside in the rhizosphere (a zone that includes the root surface and the soil that adheres to the root after gentle shaking) carry out nitrogen fixation, converting nitrogen gas to nitrogenous compounds. This serves to provide an easily absorbable form of nitrogen for many plants, which cannot fix nitrogen themselves. Many other bacteria are found as symbionts in humans and other organisms. For example, the presence of over 1,000 bacterial species in the normal human gut flora of the intestines can contribute to gut immunity, synthesise vitamins, such as folic acid, vitamin K and biotin, convert sugars to lactic acid (see Lactobacillus), as well as fermenting complex undigestible carbohydrates. The presence of this gut flora also inhibits the growth of potentially pathogenic bacteria (usually through competitive exclusion) and these beneficial bacteria are consequently sold as probiotic dietary supplements.
Nearly all animal life is dependent on bacteria for survival as only bacteria and some archaea possess the genes and enzymes necessary to synthesise vitamin B12, also known as cobalamin, and provide it through the food chain. Vitamin B12 is a water-soluble vitamin that is involved in the metabolism of every cell of the human body. It is a cofactor in DNA synthesis and in both fatty acid and amino acid metabolism. It is particularly important in the normal functioning of the nervous system via its role in the synthesis of myelin.
Pathogens
The body is continually exposed to many species of bacteria, including beneficial commensals, which grow on the skin and mucous membranes, and saprophytes, which grow mainly in the soil and in decaying matter. The blood and tissue fluids contain nutrients sufficient to sustain the growth of many bacteria. The body has defence mechanisms that enable it to resist microbial invasion of its tissues and give it a natural immunity or innate resistance against many microorganisms. Unlike some viruses, bacteria evolve relatively slowly so many bacterial diseases also occur in other animals.
If bacteria form a parasitic association with other organisms, they are classed as pathogens. Pathogenic bacteria are a major cause of human death and disease and cause infections such as tetanus (caused by Clostridium tetani), typhoid fever, diphtheria, syphilis, cholera, foodborne illness, leprosy (caused by Mycobacterium leprae) and tuberculosis (caused by Mycobacterium tuberculosis). A pathogenic cause for a known medical disease may only be discovered many years later, as was the case with Helicobacter pylori and peptic ulcer disease. Bacterial diseases are also important in agriculture, and bacteria cause leaf spot, fire blight and wilts in plants, as well as Johne's disease, mastitis, salmonella and anthrax in farm animals.
Each species of pathogen has a characteristic spectrum of interactions with its human hosts. Some organisms, such as Staphylococcus or Streptococcus, can cause skin infections, pneumonia, meningitis and sepsis, a systemic inflammatory response producing shock, massive vasodilation and death. Yet these organisms are also part of the normal human flora and usually exist on the skin or in the nose without causing any disease at all. Other organisms invariably cause disease in humans, such as Rickettsia, which are obligate intracellular parasites able to grow and reproduce only within the cells of other organisms. One species of Rickettsia causes typhus, while another causes Rocky Mountain spotted fever. Chlamydia, another phylum of obligate intracellular parasites, contains species that can cause pneumonia or urinary tract infection and may be involved in coronary heart disease. Some species, such as Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium, are opportunistic pathogens and cause disease mainly in people who are immunosuppressed or have cystic fibrosis. Some bacteria produce toxins, which cause diseases. These are endotoxins, which come from broken bacterial cells, and exotoxins, which are produced by bacteria and released into the environment. The bacterium Clostridium botulinum, for example, produces a powerful exotoxin that causes respiratory paralysis, and Salmonellae produce an endotoxin that causes gastroenteritis. Some exotoxins can be converted to toxoids, which are used as vaccines to prevent the disease.
Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics, and each class inhibits a process that is different in the pathogen from that found in the host. Examples of how antibiotics produce selective toxicity are chloramphenicol and puromycin, which inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth, where they may be contributing to the rapid development of antibiotic resistance in bacterial populations. Infections can be prevented by antiseptic measures such as sterilising the skin prior to piercing it with the needle of a syringe, and by proper care of indwelling catheters. Surgical and dental instruments are also sterilised to prevent contamination by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection.
Significance in technology and industry
Bacteria, often lactic acid bacteria, such as Lactobacillus species and Lactococcus species, in combination with yeasts and moulds, have been used for thousands of years in the preparation of fermented foods, such as cheese, pickles, soy sauce, sauerkraut, vinegar, wine, and yogurt.
The ability of bacteria to degrade a variety of organic compounds is remarkable and has been used in waste processing and bioremediation. Bacteria capable of digesting the hydrocarbons in petroleum are often used to clean up oil spills. Fertiliser was added to some of the beaches in Prince William Sound in an attempt to promote the growth of these naturally occurring bacteria after the 1989 Exxon Valdez oil spill. These efforts were effective on beaches that were not too thickly covered in oil. Bacteria are also used for the bioremediation of industrial toxic wastes. In the chemical industry, bacteria are most important in the production of enantiomerically pure chemicals for use as pharmaceuticals or agrichemicals.
Bacteria can also be used in place of pesticides in biological pest control. This commonly involves Bacillus thuringiensis (also called BT), a Gram-positive, soil-dwelling bacterium. Subspecies of this bacterium are used as Lepidopteran-specific insecticides under trade names such as Dipel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects.
Because of their ability to quickly grow and the relative ease with which they can be manipulated, bacteria are the workhorses for the fields of molecular biology, genetics, and biochemistry. By making mutations in bacterial DNA and examining the resulting phenotypes, scientists can determine the function of genes, enzymes, and metabolic pathways in bacteria, then apply this knowledge to more complex organisms. This aim of understanding the biochemistry of a cell reaches its most complex expression in the synthesis of huge amounts of enzyme kinetic and gene expression data into mathematical models of entire organisms. This is achievable in some well-studied bacteria, with models of Escherichia coli metabolism now being produced and tested. This understanding of bacterial metabolism and genetics allows the use of biotechnology to bioengineer bacteria for the production of therapeutic proteins, such as insulin, growth factors, or antibodies.
Because of their importance for research in general, samples of bacterial strains are isolated and preserved in Biological Resource Centres. This ensures the availability of the strain to scientists worldwide.
History of bacteriology
Bacteria were first observed by the Dutch microscopist Antonie van Leeuwenhoek in 1676, using a single-lens microscope of his own design. He then published his observations in a series of letters to the Royal Society of London. Bacteria were Leeuwenhoek's most remarkable microscopic discovery. Their size was just at the limit of what his simple lenses could resolve, and, in one of the most striking hiatuses in the history of science, no one else would see them again for over a century. His observations also included protozoans which he called animalcules, and his findings were looked at again in the light of the more recent findings of cell theory.
Christian Gottfried Ehrenberg introduced the word "bacterium" in 1828. In fact, his Bacterium was a genus that contained non-spore-forming rod-shaped bacteria, as opposed to Bacillus, a genus of spore-forming rod-shaped bacteria defined by Ehrenberg in 1835.
Louis Pasteur demonstrated in 1859 that the growth of microorganisms causes the fermentation process and that this growth is not due to spontaneous generation (yeasts and molds, commonly associated with fermentation, are not bacteria, but rather fungi). Along with his contemporary Robert Koch, Pasteur was an early advocate of the germ theory of disease. Before them, Ignaz Semmelweis and Joseph Lister had realised the importance of sanitised hands in medical work. Semmelweis, who in the 1840s formulated his rules for handwashing in the hospital, prior to the advent of germ theory, attributed disease to "decomposing animal organic matter". His ideas were rejected and his book on the topic condemned by the medical community. After Lister, however, doctors started sanitising their hands in the 1870s.
Robert Koch, a pioneer in medical microbiology, worked on cholera, anthrax and tuberculosis. In his research into tuberculosis, Koch finally proved the germ theory, for which he received a Nobel Prize in 1905. In Koch's postulates, he set out criteria to test if an organism is the cause of a disease, and these postulates are still used today.
Ferdinand Cohn is said to be a founder of bacteriology, studying bacteria from 1870. Cohn was the first to classify bacteria based on their morphology.
Though it was known in the nineteenth century that bacteria are the cause of many diseases, no effective antibacterial treatments were available. In 1910, Paul Ehrlich developed the first antibiotic, by changing dyes that selectively stained Treponema pallidum—the spirochaete that causes syphilis—into compounds that selectively killed the pathogen. Ehrlich, who had been awarded a 1908 Nobel Prize for his work on immunology, pioneered the use of stains to detect and identify bacteria, with his work being the basis of the Gram stain and the Ziehl–Neelsen stain.
A major step forward in the study of bacteria came in 1977 when Carl Woese recognised that archaea have a separate line of evolutionary descent from bacteria. This new phylogenetic taxonomy depended on the sequencing of 16S ribosomal RNA and divided prokaryotes into two evolutionary domains, as part of the three-domain system.
See also
Bacteriohopanepolyol
Genetically modified bacteria
Marine prokaryotes
References
Bibliography
External links
On-line text book on bacteriology (2015)
Bacteriology
Domains (biology)
Biology terminology
Mobile music

Mobile music is music which can be transported, or in other words, is mobile. The term itself is quite ambiguous.
An outdated definition is as follows; 'mobile music is music which is downloaded or streamed to mobile phones and played by mobile phones. Although many phones play music as ringtones, true "music phones" generally allow users to stream music or download music files over the internet via a WiFi connection or 3G cell phone connection. Music phones are also able to import audio files from their PCs. The case of mobile music being stored within the memory of the mobile phone is the case similar to traditional business models in the music industry. It supports two variants: the user can either purchase the music for outright ownership or access entire libraries of music via a subscription model. In this case the music files are available as long as the subscription is active.'
Truetones
While ringtones do not include artists' voices, truetones, chaku-uta and chaku-uta full are recordings of artists' interpretations of music. Distributing them usually requires the agreement of record labels and other owners of artists' rights.
History
Mobile music is technically any form of music which can be moved, including musical instruments. This article, however, does not cover the history of musical instruments, which is treated in its own articles.
Physical hardware
Radios
Radio is one of the earliest forms of technology-based mobile music.
Cassette tape players
The most prominent and iconic piece of mobile music hardware is Sony's Walkman, a portable cassette tape player; similar products were made by multiple companies.
CD players
Portable CD players, such as Sony's Discman, were another innovation in the realm of mobile music.
MP3 players
The MP3 player was a step towards a truly digital age of music.
Mobile phones
The integration of music into the cellphone was not easy. On one hand, technology for portable music had been developing since the 1980s, with Sony driving the area with its portable Walkman. On the other, cellphone technology had focused on imaging, leveraging user interest in taking pictures and the operators' need to drive data revenues through use of their networks. The success of ringtones in driving data revenues had placed operators on guard for interactive applications that could drive revenues. Nevertheless, the slow data speeds of 1G and 2G GSM and CDMA networks made downloading music data over the network uneconomic compared with downloading from media sites to a computer. Operators, which tended to subsidize phones with data capabilities, therefore focused more on ringtone, SMS, and picture phones than on music-ready phones, and this discouraged many manufacturers from developing such phones, because their primary customer is the operator and not the user. Work on compression algorithms for music was extensive, with AMR trying to push the envelope, but the Napster revolution spread the MP3 format worldwide and manufacturers began to take notice. Another issue was the development of DRM capabilities, which helped prevent music piracy and gave mobile music more of a legal status. At that time, Apple was revolutionizing the world with the introduction of the new iPods and its iTunes Store.
The first report on a business plan and the need for successful integration of music phones, "Music phones are key for 3G", was written in 2004 by Strategy Analytics, a cellular consulting firm in Massachusetts. The report highlighted the need for phone manufacturers like Nokia and Motorola to join the bandwagon and explore several music options, including the development of a music store strategy by Nokia and the integration of iTunes into a phone by Motorola with its ROKR. Sony, Samsung, and LG were too busy focusing on increasing pixel counts and stability in their camera modules. Sony tried to leverage its Cyber-shot technology in a multimedia strategy, but the change came too slowly, while Samsung was driving the high tier segment by improving display capability. Nokia worked hard to build DRM technology into its Ovi music store and introduced a new music phone line called XpressMusic, banking on end-user needs rather than operators' wants, because the line was expected to receive lower subsidies from operators than others. In this way, Nokia was banking on the report's idea that music could be used to drive customer acquisition rather than data revenues. The ROKR was a success, drawing new subscribers in a highly competitive US market even though it still remained tied to a computer for music downloads. It could be said that it was AT&T's adoption of the ROKR as an acquisition strategy for the US market that prompted operators to purchase music-capable phones and manufacturers to develop them. AT&T's success in driving acquisitions was copied by other operators such as Verizon, Sprint, and T-Mobile, which also drove the introduction of music into cellphones. Two years after the ROKR, Apple introduced its iPhone, and things took off from there. Today, most cell phones incorporate music capabilities, which have also been transferred to smartphones. The built-in app used to play music on the iPhone or iPod touch is called Music (on iOS 5 or higher) or iPod (on iOS 4 or lower). While many apps offer music, this is the most common and, for many people, the only music app they need.
iPod/Zune
The iPod and the Zune are two similar portable media players, made by Apple and Microsoft respectively.
Smartphones
The smartphone is the most revolutionary piece of mobile music technology to date, allowing access to nearly every artist at the click of a button.
Software
Napster
Napster laid the groundwork for the coming wave of streaming audio services.
Music streaming
Music streaming became prominent in the 2000s. Prominent music streaming platforms include:
Gaana
Spotify
Wynk Music
JioSaavn
Social issues
Pirating
Piracy is another issue artists face. Napster was the first large-scale system for it, and laid the groundwork for streaming.
Lack of income for artists
The introduction of streaming has left many artists unable to support themselves through their music and its sales alone. The COVID-19 pandemic quickly made people aware of this issue.
Social impact
Mobile music has become so prominent in daily life that studies have been conducted on its social impact.
See also
Musical instrument
Ringtone
References
Mobile content
Mobile telecommunications
David Chilton Phillips

David Chilton Phillips, Baron Phillips of Ellesmere (7 March 1924 – 23 February 1999) was a pioneering British structural biologist and an influential figure in science and government.
Education and early life
David was the son of Charles Harry Phillips, a master tailor and Methodist preacher, and his wife, Edith Harriet Finney, a midwife. His mother's father was Samuel Finney, a coal miner, union official and Member of Parliament.
He was born in Ellesmere, Shropshire, which gave rise to his title Baron Phillips of Ellesmere. He was educated at Oswestry High School for Boys and then at the University College of South Wales and Monmouthshire, where he studied physics, electrical engineering, and mathematics. His degree was interrupted between 1944 and 1947 for service in the Royal Navy as a radar officer on HMS Illustrious. He returned to Cardiff to complete his degree (BSc in 1948) and then undertook postgraduate studies with Arthur Wilson. He was awarded his PhD in 1951.
Career and research
After a postdoctoral period at the National Research Council in Ottawa (1951–55) he joined the Royal Institution. In 1966 he was appointed Professor of Molecular Biophysics in the Department of Zoology at the University of Oxford, where he remained until his retirement in 1990. During that time he was elected a Fellow of the Royal Society (FRS), serving as Biological Secretary from 1976 to 1983.
Phillips led the team which determined, in atomic detail, the structure of the enzyme lysozyme, a feat accomplished in the Davy Faraday Research Laboratories of the Royal Institution in London in 1965. Lysozyme, which was discovered in 1922 by Alexander Fleming, is found in tear drops, nasal mucus, gastric secretions and egg white. Lysozyme exhibits some antibacterial activity, so the discovery of its structure and mode of action were key scientific objectives. David Phillips solved the structure of lysozyme and also explained the mechanism of its action in destroying certain bacteria by a brilliant application of the technique of X-ray crystallography, a technique to which he had been introduced as a PhD student in Cardiff, and to which he later made major instrumental contributions.
Honours and awards
Phillips was made a Knight Bachelor in the 1979 Birthday Honours, invested as Knight Commander of the Order of the British Empire (KBE) in the 1989 New Year Honours, and created a Life Peer as Baron Phillips of Ellesmere, of Ellesmere in the County of Shropshire on 14 July 1994. In the House of Lords, he chaired the select committee on Science and Technology and he is credited with getting Parliament onto the World Wide Web. In 1994, he was awarded an Honorary Degree (Doctor of Science) by the University of Bath.
In 1980 he was invited to deliver the Royal Institution Christmas Lectures on The Chicken, the Egg and the Molecules.
Personal life
In 1960 Phillips married Diana Hutchinson. Phillips died of prostate cancer on 23 February 1999, having been diagnosed in 1988.
References
1924 births
1999 deaths
20th-century British biologists
Structural biologists
Deaths from prostate cancer in England
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Fullerian Professors of Physiology
History of X-rays
Knights Commander of the Order of the British Empire
Crossbench life peers
Scientists from Shropshire
People from Ellesmere, Shropshire
Royal Medal winners
Wolf Prize in Chemistry laureates
Knights Bachelor
Presidents of the British Crystallographic Association
Members of the Royal Swedish Academy of Sciences
Life peers created by Elizabeth II
Maternal impression

The conception of a maternal impression rests on the belief that a powerful mental (or sometimes physical) influence working on the mother's mind may produce an impression, either general or definite, on the child she is carrying. The child might be said to be "marked" as a result.
Medicine
Maternal impression, according to a long-discredited medical theory, was a phenomenon that explained the existence of birth defects and congenital disorders. The theory stated that an emotional stimulus experienced by a pregnant woman could influence the development of the fetus. For example, it was sometimes supposed that the mother of the Elephant Man was frightened by an elephant during her pregnancy, thus "imprinting" the memory of the elephant onto the gestating fetus. Mental problems, such as schizophrenia and depression, were believed to be a manifestation of similar disordered feelings in the mother. For instance, a pregnant woman who experienced great sadness might imprint depressive tendencies onto the fetus in her womb.
The theory of maternal impression was largely abandoned by the 20th century, with the development of modern genetic theory.
Folklore
In folklore, maternal imprinting, or Versehen (a German noun meaning "inadvertence" or as a verb "to provide") as it is usually called, is the belief that a sudden fear of some object or animal in a pregnant woman can cause her child to bear the mark of it.
Some of the more vivid examples are given in Vance Randolph's Ozark Superstitions:

Children are also said to be marked by some sudden fright or unpleasant experience of the mother, and I have myself seen a pop-eyed, big-mouthed idiot whose condition is ascribed to the fact that his mother stepped on a toad several months before his birth. In another case, a large red mark on a baby's cheek was caused by the mother seeing a man shot down at her side, when the discharge of the gun threw some of the blood and brains into her face.

Other explanations claimed that birthmarks shaped like food were the direct result of the mother's pregnancy cravings, or that if the mother touched a certain part of her body during a solar eclipse, her child's birthmark would appear in the same location. Still others warn against the pregnant mother's viewing any image of a satyr or similar spirit, as the child may be born with a similar appearance.
Oswald Spengler understood maternal imprinting to be a folkloric understanding of what he called "blood feeling" or the formation of a group aesthetic of a bodily ideal:
What is called the Versehen of a pregnant woman is only a particular and not very important instance of the workings of a very deep and powerful formative principle inherent in all that is of the race side. It is a matter of common observation that elderly married people become strangely like one another, although probably Science with its measuring instruments would "prove" the exact opposite. It is impossible to exaggerate the formative power of this living pulse, this strong inward feeling for the perfection of one's own type. The feeling for race-beauty—so opposite to the conscious taste of ripe urbans for intellectual-individual traits of beauty—is immensely strong in primitive men, and for that very reason never emerges into their consciousness. But such a feeling is race-forming. It undoubtedly molded the warrior- and hero-type of a nomad tribe more and definitely on one bodily ideal, so that it would have been quite unambiguous to speak of the race-figure of Romans or Ostrogoths.
Pliny the Elder also comments at length about the phenomenon of postpartum maternal impression in bears, i.e., the folk belief that newborn bears must be licked and molded into bear-shape by their mothers.
Literature
Examples of maternal impression in literature can be found in the Aethiopica of Heliodorus of Emesa and in The Life and Opinions of Tristram Shandy, Gentleman by Laurence Sterne.
See also
Lihi
Cortisol#Effects during pregnancy
Epigenetics
Fetal origins hypothesis
Fetal origins of adult disease
Lamarckism
Mary Toft
Mooncalf
Pseudoscience
Sooterkin
Telegony
References
Obsolete medical theories
Obsolete biology theories
Superstitions
Folklore
Polyphenylsulfone

Polyphenylsulfone (PPSF or PPSU) is a high performance polymer made of aromatic rings linked by sulfone (SO2) groups.
Production
Commercially important polysulfones are prepared by condensation of 4,4'-bis(chlorophenyl)sulfone with various bisphenols. Two bisphenols for this application are bisphenol A (the polymer being called PSF) and 4,4'-bis(4-hydroxyphenyl)sulfone (the polymer being called PES).
Applications
PPSF is a moldable plastic often used in rapid prototyping and rapid manufacturing (direct digital manufacturing) applications. Polyphenylsulfone is heat- and chemical-resistant, suited for automotive, aerospace, and plumbing applications. Polyphenylsulfone has no melting point, reflecting its amorphous nature, and offers tensile strength up to 55 MPa (8000 psi). Its commercial name is Radel. In plumbing applications, polyphenylsulfone fittings have been found to sometimes form cracks prematurely or to fail when installed using methods or systems not approved by the manufacturer.
References
3D printing
Plastics
Benzosulfones
Fuel efficiency

Fuel efficiency (or fuel economy) is a form of thermal efficiency, meaning the ratio of effort to result of a process that converts chemical potential energy contained in a carrier (fuel) into kinetic energy or work. Overall fuel efficiency may vary per device, which in turn may vary per application, and this spectrum of variance is often illustrated as a continuous energy profile. Non-transportation applications, such as industry, benefit from increased fuel efficiency, especially fossil fuel power plants or industries dealing with combustion, such as ammonia production during the Haber process.
In the context of transport, fuel economy is the energy efficiency of a particular vehicle, given as a ratio of distance traveled per unit of fuel consumed. It is dependent on several factors including engine efficiency, transmission design, and tire design. In most countries, using the metric system, fuel economy is stated as "fuel consumption" in liters per 100 kilometers (L/100 km) or kilometers per liter (km/L or kmpl). In a number of countries still using other systems, fuel economy is expressed in miles per gallon (mpg), for example in the US and usually also in the UK (imperial gallon); there is sometimes confusion as the imperial gallon is 20% larger than the US gallon so that mpg values are not directly comparable. Traditionally, litres per mil were used in Norway and Sweden, but both have aligned to the EU standard of L/100 km.
Fuel consumption is a more accurate measure of a vehicle's performance because it is a linear relationship while fuel economy leads to distortions in efficiency improvements. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger vehicles.
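A short sketch of these conversions and of the distortion just mentioned; the unit constants are exact definitions, while the mpg pairs are illustrative:

```python
US_GAL_L = 3.785411784   # litres per US gallon (exact)
IMP_GAL_L = 4.54609      # litres per imperial gallon (exact); ~20% larger
MILE_KM = 1.609344       # kilometres per mile (exact)

def mpg_us_to_l_per_100km(mpg: float) -> float:
    """Convert US miles per gallon to litres per 100 km."""
    km_per_litre = mpg * MILE_KM / US_GAL_L
    return 100.0 / km_per_litre

def mpg_us_to_imp(mpg_us: float) -> float:
    """Same fuel consumption expressed in imperial mpg (~20% higher)."""
    return mpg_us * IMP_GAL_L / US_GAL_L

# Equal mpg gains save very different amounts of fuel, which is why
# fuel consumption (L/100 km) is the more accurate, linear measure.
for low, high in [(10, 12), (30, 32)]:
    saved = mpg_us_to_l_per_100km(low) - mpg_us_to_l_per_100km(high)
    print(f"{low} -> {high} mpg saves {saved:.2f} L per 100 km")
# 10 -> 12 mpg saves ~3.9 L/100 km; 30 -> 32 mpg saves only ~0.5 L/100 km
```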
Vehicle design
Fuel efficiency is dependent on many parameters of a vehicle, including its engine parameters, aerodynamic drag, weight, AC usage, fuel and rolling resistance. There have been advances in all areas of vehicle design in recent decades. Fuel efficiency of vehicles can also be improved by careful maintenance and driving habits.
Hybrid vehicles use two or more power sources for propulsion. In many designs, a small combustion engine is combined with electric motors. Kinetic energy which would otherwise be lost to heat during braking is recaptured as electrical power to improve fuel efficiency. The larger batteries in these vehicles power the car's electronics, allowing the engine to shut off and avoid prolonged idling.
Fleet efficiency
Fleet efficiency describes the average efficiency of a population of vehicles. Technological advances in efficiency may be offset by a change in buying habits with a propensity to heavier vehicles that are less fuel-efficient.
Energy efficiency terminology
Energy efficiency is similar to fuel efficiency but the input is usually in units of energy such as megajoules (MJ), kilowatt-hours (kW·h), kilocalories (kcal) or British thermal units (BTU). The inverse of "energy efficiency" is "energy intensity", or the amount of input energy required for a unit of output such as MJ/passenger-km (of passenger transport), BTU/ton-mile or kJ/t-km (of freight transport), GJ/t (for production of steel and other materials), BTU/(kW·h) (for electricity generation), or litres/100 km (of vehicle travel). Litres per 100 km is also a measure of "energy intensity" where the input is measured by the amount of fuel and the output is measured by the distance travelled (see, for example, fuel economy in automobiles).
Given a heat value of a fuel, it would be trivial to convert from fuel units (such as litres of gasoline) to energy units (such as MJ) and conversely. But there are two problems with comparisons made using energy units:
There are two different heat values for any hydrogen-containing fuel which can differ by several percent (see below).
When comparing transportation energy costs, a kilowatt hour of electric energy may require an amount of fuel with heating value of 2 or 3 kilowatt hours to produce it.
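As a concrete illustration of such a conversion, the following Python sketch turns litres of gasoline into megajoules under both heating-value conventions; the density and heating values are typical textbook figures and should be read as assumptions, since real blends vary:

```python
# Fuel-volume -> energy conversion; the choice of heating value matters.
GASOLINE_DENSITY = 0.745   # kg/L (assumed; varies by blend)
GASOLINE_HHV = 46.5        # MJ/kg, high (gross) heating value (assumed)
GASOLINE_LHV = 43.4        # MJ/kg, low (net) heating value (assumed)

def litres_to_mj(litres: float, heating_value_mj_per_kg: float) -> float:
    """Energy content of a fuel volume for a chosen heating value."""
    return litres * GASOLINE_DENSITY * heating_value_mj_per_kg

print(f"{litres_to_mj(1.0, GASOLINE_HHV):.1f} MJ/L gross")  # ~34.6
print(f"{litres_to_mj(1.0, GASOLINE_LHV):.1f} MJ/L net")    # ~32.3
```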
Energy content of fuel
The specific energy content of a fuel is the heat energy obtained when a certain quantity of it is burned (such as a gallon, litre, or kilogram). It is sometimes called the heat of combustion. There exist two different values of specific heat energy for the same batch of fuel: the high (or gross) heat of combustion and the low (or net) heat of combustion. The high value is obtained when, after the combustion, the water in the exhaust is in liquid form; for the low value, the exhaust has all the water in vapor form (steam). Since water vapor gives up heat energy when it changes from vapor to liquid, the liquid-water (high) value is larger, as it includes the latent heat of vaporization of water. The difference between the high and low values is significant, about 8 or 9%, and accounts for most of the apparent discrepancy between published heat values for gasoline. In the U.S. the high heat values have traditionally been used, but in many other countries the low heat values are commonly used.
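The gap between the two values can be estimated from the fuel's hydrogen content, since each kilogram of hydrogen burns to nine kilograms of water whose latent heat is counted only in the high value. A sketch under stated assumptions (octane standing in for gasoline, which is really a blend):

```python
# Relating gross (HHV) and net (LHV) heating values via the latent heat
# of the water formed in combustion. Illustrative figures only.
H_FG_WATER = 2.44           # MJ/kg, latent heat of vaporization near 25 C
H_MASS_FRACTION = 18 / 114  # hydrogen mass fraction of octane, C8H18

def lhv_from_hhv(hhv_mj_per_kg: float, h_fraction: float) -> float:
    """Net heating value: subtract the heat carried away as water vapour.

    Each kg of hydrogen burns to 9 kg of water (H2O/H2 mass ratio 18/2).
    """
    water_per_kg_fuel = 9.0 * h_fraction
    return hhv_mj_per_kg - water_per_kg_fuel * H_FG_WATER

hhv = 46.5                                # MJ/kg, assumed gross value
lhv = lhv_from_hhv(hhv, H_MASS_FRACTION)  # ~43.0 MJ/kg
print(f"LHV ~ {lhv:.1f} MJ/kg ({100 * (hhv - lhv) / hhv:.0f}% below HHV)")
```

The computed gap of roughly 7-8% is consistent with the 8-9% figure quoted above.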
Neither the gross heat of combustion nor the net heat of combustion gives the theoretical amount of mechanical energy (work) that can be obtained from the reaction. (This is given by the change in Gibbs free energy, and is around 45.7 MJ/kg for gasoline.) The actual amount of mechanical work obtained from fuel (the inverse of the specific fuel consumption) depends on the engine. A figure of 17.6 MJ/kg is possible with a gasoline engine, and 19.1 MJ/kg for a diesel engine. See Brake-specific fuel consumption for more information.
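Dividing those specific-work figures by the theoretical maximum gives the fraction of available work each engine extracts, and inverting them gives the corresponding brake-specific fuel consumption. A sketch using only the numbers quoted above (reusing the gasoline Gibbs figure for the diesel comparison is an approximation):

```python
# Engine efficiency and BSFC implied by the specific-work figures above.
GIBBS_GASOLINE = 45.7   # MJ/kg, theoretical maximum work (figure above)

def efficiency(work_mj_per_kg: float, reference_mj_per_kg: float) -> float:
    """Fraction of the theoretical work actually delivered."""
    return work_mj_per_kg / reference_mj_per_kg

def bsfc_g_per_kwh(work_mj_per_kg: float) -> float:
    """Brake-specific fuel consumption implied by a specific work output."""
    return 1000 * 3.6 / work_mj_per_kg   # 1 kWh = 3.6 MJ

for label, work in [("gasoline engine", 17.6), ("diesel engine", 19.1)]:
    print(f"{label}: {efficiency(work, GIBBS_GASOLINE):.0%} of theoretical"
          f" work, BSFC ~ {bsfc_g_per_kwh(work):.0f} g/kWh")
# gasoline engine: ~39%, ~205 g/kWh; diesel engine: ~42%, ~188 g/kWh
```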
Transportation
Fuel efficiency of motor vehicles
Driving technique
Advanced technology
The most efficient machines for converting energy to rotary motion are electric motors, as used in electric vehicles. However, electricity is not a primary energy source, so the efficiency of electricity production also has to be taken into account. Railway trains can be powered using electricity, delivered through an additional running rail or an overhead catenary system, or generated on board by diesel-electric locomotives, as is common on the US and UK rail networks. Pollution produced from centralised generation of electricity is emitted at a distant power station rather than "on site". Pollution can be reduced by using more railway electrification and low-carbon power for electricity. Some railways, such as the French SNCF and the Swiss Federal Railways, derive most, if not all, of their power from hydroelectric or nuclear power stations, so atmospheric pollution from their rail networks is very low. This was reflected in a study by AEA Technology comparing Eurostar train and airline journeys between London and Paris, which found that the trains emitted on average 10 times less CO2 per passenger than planes, helped in part by French nuclear generation.
Hydrogen fuel cells
In the future, hydrogen cars may be commercially available. Toyota is test-marketing vehicles powered by hydrogen fuel cells in southern California, where a series of hydrogen fueling stations has been established. Powered either through chemical reactions in a fuel cell that create electricity to drive very efficient electric motors, or by directly burning hydrogen in a combustion engine (nearly identically to a natural gas vehicle, and similarly compatible with both natural gas and gasoline), these vehicles promise near-zero pollution from the tailpipe (exhaust pipe). Atmospheric pollution could potentially be minimal, provided the hydrogen is made by electrolysis using electricity from non-polluting sources such as solar, wind, hydroelectric or nuclear power. Commercial hydrogen production, however, relies on fossil fuels and produces more carbon dioxide by mass than hydrogen.
Because there are pollutants involved in the manufacture and destruction of a car and the production, transmission and storage of electricity and hydrogen, the label "zero pollution" applies only to the car's conversion of stored energy into movement.
In 2004, a consortium of major auto-makers — BMW, General Motors, Honda, Toyota and Volkswagen/Audi — introduced the "Top Tier Detergent Gasoline" standard for gasoline brands in the US and Canada that meet their minimum standards for detergent content and do not contain metallic additives. Top Tier gasoline contains higher levels of detergent additives in order to prevent the build-up of deposits (typically on fuel injectors and intake valves) known to reduce fuel economy and engine performance.
In microgravity
How fuel combusts affects how much energy is produced. The National Aeronautics and Space Administration (NASA) has investigated fuel consumption in microgravity.
The common distribution of a flame under normal gravity conditions depends on convection, because soot tends to rise to the top of a flame, such as in a candle, making the flame yellow. In microgravity or zero gravity, such as in outer space, convection no longer occurs and the flame becomes spherical, with a tendency to become bluer and more efficient. There are several possible explanations for this difference, the most likely being that the temperature is evenly distributed enough that soot does not form and combustion is complete. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidised after it is produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity than under normal gravity conditions. Premixed flames in microgravity burn at a much slower rate and more efficiently than even a candle on Earth, and last much longer.
See also
References
External links
US Government website on fuel economy
UK DfT comparisons on road and rail
NASA Offers a $1.5 Million Prize for a Fast and Fuel-Efficient Aircraft
Car Fuel Consumption Official Figures
Spritmonitor.de "the most fuel efficient cars" - Database of thousands of (mostly German) car owners' actual fuel consumption figures (cf. Spritmonitor)
Searchable fuel economy data from the EPA - United States Environmental Protection Agency
NY Times: A Road Test of Alternative Fuel Visions
Energy economics
Physical quantities
Energy efficiency
Transport economics | Fuel efficiency | Physics,Mathematics,Environmental_science | 2,084 |
25,008,653 | https://en.wikipedia.org/wiki/27%20Hydrae | 27 Hydrae is a triple star system in the equatorial constellation of Hydra, located 222 light years away from the Sun. It is visible to the naked eye as a faint, orange-hued star with a combined apparent visual magnitude of 4.82. The system is moving further from the Earth with a heliocentric radial velocity of +25.6 km/s.
The magnitude 4.91 primary, component A, is an aging giant star with a stellar classification of K0 III. It is a red clump giant, which indicates it is on the horizontal branch and is generating energy through helium fusion at its core. The star is 1.9 billion years old with 2.17 times the mass of the Sun. It has swelled to 11 times the Sun's radius and is radiating 57.5 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,965 K. The star is suspected to host a low-mass companion.
The stellar companions to this star, designated components B and C, lie at an angular separation of from the primary, and form a binary pair with a separation of 9.20″ as of 2015. The brighter member of the pair, component B, is a seventh magnitude F-type main-sequence star with a class of F4 V, while its companion is an eleventh magnitude K-type main-sequence star with a class of K2 V.
Substellar companion
The Okayama Planet Search team published a paper in late 2008 reporting investigations into radial velocity variations observed for a set of evolved stars, showing hints of a substellar companion orbiting the primary member of the wide binary system 27 Hydrae. Its orbital period is estimated at 9.3 years, but no planet has been confirmed yet.
References
K-type giants
F-type main-sequence stars
K-type main-sequence stars
Horizontal-branch stars
Triple star systems
Hypothetical planetary systems
Hydra (constellation)
Hydrae, P
Durchmusterung objects
Hydrae, 27
080586
045811
3709 | 27 Hydrae | Astronomy | 418 |
58,473,010 | https://en.wikipedia.org/wiki/Alternaria%20brassicicola | Alternaria brassicicola is a fungal necrotrophic plant pathogen that causes black spot disease on a wide range of hosts, particularly in the genus Brassica, including a number of economically important crops such as cabbage, Chinese cabbage, cauliflower, oilseeds, broccoli and canola. Although mainly known as a significant plant pathogen, it also contributes to various respiratory allergic conditions such as asthma and rhinoconjunctivitis. Despite the presence of mating genes, no sexual reproductive stage has been reported for this fungus. In terms of geography, it is most likely to be found in tropical and sub-tropical regions, but also in places with high rain and humidity, such as Poland. It has also been found in Taiwan and Israel. Its main mode of propagation is vegetative. The resulting conidia reside in the soil, air and water. These spores are extremely resilient and can overwinter on crop debris and overwintering herbaceous plants.
Growth and morphology
The conidia of A. brassicicola are abundant in the outdoor environment from the months of May to late October in the northern hemisphere, peaking in June and again in October. The conidia are dark brown and smooth-walled, up to 60 × 14 μm. They are cylindrical to oblong in shape, muriform, and produced in chains of 8-10 spores. They are firmly attached to conidiophores that are olive-brown, septate, and grow to an upper range of 100-200 μm, although this overall length may vary. Conidia are borne in a continuous, chain-like structure, but branching at the base has also been observed. Although conidia can be spread by rain, the most common means of spread is through the air. The fungus grows on epidermal leaf wax of plants, particularly those in the Brassicaceae, and prefers an environment with high humidity and temperature range of . Macroscopically, the mycelium exhibits a range of colour: unpigmented when young, to olive-grey, grey-black at maturity. Colonies of A. brassicicola tend to be dark brown or black in colour.
Research history
Historically, much of the early research concerning the fungus was based on plant defense mechanisms. However, once its genome was sequenced, efforts shifted to identifying the genes involved in host-parasite interaction. One of the pioneers for genetic research into Alternaria brassicicola was the Lawrence group at Virginia Bioinformatics Institute and the Genome Center at Washington University. The most common media used for A. brassicicola growth are PDA (potato dextrose agar) and V8 juice-agar. In vitro and under optimal conditions, colonies grow rapidly and appear dark green or white-grey. Spontaneous sporulation occurs at 25°C in darkness on PDA medium.
Growth cycle
Hours after inoculation:
2h: Conidia swell
3h: Germ tube formation observed at the apical or middle cells of conidia
8h: Vesicle of dissolved contents moves from conidial cell to germ tube
20h: Infection of the host cell
48h: Mycelial network develops on the surface
72h: Many chains of conidia can be seen
Pathogenesis and infection
There are three main sources of infection: nearby infected seeds, spores from plant debris in the topsoil and Brassica weeds, and spores moved by wind and air from farther away. Infected leaves can spread their spores up to a diameter of 1,800 m. There are also three major entry points to the host cell: epidermal penetration, stomatal penetration and penetration through an insect. Contact with the host cell triggers the release of various cell wall degrading enzymes which allow the fungus to attach itself to the plant and begin degradation. The suggested mode of attack is through host-specific toxins, primarily AB toxins, that induce cell death by apoptosis. This results in what look like dents and lesions in the host plant. These are brown, concentric circles with a yellow tinge at the circumference, usually about 0.5-2.5 cm in diameter. Necrosis can generally be observed within 48 hours of infection. The spores can reside on the external seed coat of infected seeds, but the mycelium can also penetrate under the seed coat, where it has the ability to remain viable for several years. Occasionally, it can even penetrate the embryo tissue. The primary mode of transmission is through contaminated seed. Also, the infection is not limited to specific areas of the host plant; it can spread all over and even cause damping off of the seedlings at a relatively early stage. It also affects the host species at various developmental stages. As mentioned above, seedlings exhibit dark stem lesions followed by damping off. Velvety, black spots, resembling soot, can be observed on older plants. Pathogenesis is affected by factors such as temperature, humidity, pH, reactive oxygen species, and host defense molecules.
Genes
Out of the 10,688 predicted genes from the A. brassicicola genome, 139 encode small secretion proteins that may be involved in pathogenesis, 76 encode lipases and 249 encode glycosyl hydrolases that are important for polysaccharide digestion, potentially damaging host cells. In contrast, mutations in genes such as AbHog1, AbNPS2, and AbSlt2 affect cell wall integrity and make the fungus more susceptible to host defenses. Currently, research is being done to identify the gene(s) responsible for encoding a transcription factor, Bdtf1, important for the detoxification of host metabolites.
Biochemistry
The most common toxin studied for A. brassicicola is the AB toxin, said to be connected to the virulence, pathogenicity and host range of the fungus. It is most likely produced during conidial germination and probably linked to the ability of the fungus to infect and colonize Brassica leaves. However, recent studies have explored new potential metabolites. For example, this fungus also produces histone deacetylase inhibitors, but these do not have a significant impact on lesion size. Some studies show only a 10% reduction in virulence. Furthermore, alternariol and tenuazonic acid seem to affect mitochondrial-mediated apoptosis pathways and protein synthesis respectively (in the host cell), but again, not to a significant degree. Some cytokines have been linked with the discolouration associated with A. brassicicola infection. Cell wall degrading enzymes like lipases and cutinases are also linked to its pathogenicity, but more evidence of their efficacy is required. One important transcription factor is AbPf2. It regulates 6 of the 139 genes encoding small secretion proteins and may have a role in pathogenesis, specifically cellulose digestion.
Treatments
In order to protect their crops, many individuals pre-treat their seeds with fungicides. The most widespread active ingredients in these fungicides are iprodione and strobilurins. In 1995, it was reported that iprodione most likely acts by mutating two histidine residues in the target site of enzymes. Ultimately, it inhibits germ tube growth. However, the ubiquitous use of fungicides has resulted in the fungus growing increasingly resistant. Thus, different, non-chemical approaches have been explored. People have tried to develop resistant Brassicaceae crops through breeding. However, this has proved challenging due to the difficulty of transferring genes from wild-type to cultivated strains, resulting in genetic bottlenecks. It is further complicated by the probability that resistance is a polygenic trait. There are also some Brassica plants that have developed resistance to the pathogen naturally. High phenolase activity, high leaf sugar, and thicker wax layers reduce water-borne spore germination. It has been shown that the presence of camalexin in the host plant helps it to disrupt pathogen development. For example, an Arabidopsis mutant in the pad-3 gene that does not produce camalexin is more susceptible to infection. Varying camalexin levels confer differing degrees of resistance. Another suggestion put forth is crop debris management. The aim is to minimize exposure of the crop plants to spores present in the soil by using crop rotation and weed control.
Biological approaches have also been studied. One approach has been to use antagonistic fungi such as Aureobasidium pullulans and Epicoccum nigrum to subdue the effect of A. brassicicola. The plants C. fenestratum and Piper betle also show potent fungicidal activity towards A. brassicicola both in vitro and under greenhouse conditions. These levels are comparable to those of iprodione. The active compound, berberine, affects cell wall integrity and ergosterol biosynthesis. Ethanol extracts from the dried roots of Solanum nigrum (black nightshade), traditionally used as herbal remedies in places ranging from the Far East to India and Mexico, show promising anti-fungal activity as well. They seem to suppress conidial germination, possibly by interfering with the AB toxin.
Economic impact
As mentioned previously, Alternaria brassicicola causes severe black spot diseases in a number of economically important crops. Often, it occurs in conjunction with Alternaria brassicae; however, it is the more dominant invasive species. These infections lead to a significant loss in viable seeds and produce. The resulting lesions greatly reduce available photosynthetic area, leading to wilt and plant death. Infected crops such as cabbages do not last long in storage or transportation. In some cases, yield reductions can be as high as 20-50%. The inability to use fungicides makes it challenging to sustain organic crops in a cost-effective way.
References
brassicicola
Fungal plant pathogens and diseases
Eudicot diseases
Fungi described in 1947
Fungus species | Alternaria brassicicola | Biology | 2,045 |
74,532,850 | https://en.wikipedia.org/wiki/Russula%20amoenolens | Russula amoenolens, also known by its common name camembert brittlegill, is a member of the genus Russula. The species has a greyish-brown cap, with clear scoring along the edge. While inedible, the mushroom is known for its distinctive smell, reminiscent of Camembert cheese. The mushroom often appears under oak trees from summer to autumn.
Taxonomy
The species was first described by French mycologist Henri Romagnesi in 1952.
Distribution
The species is primarily found in Europe, but has also been reported in the United States, Costa Rica, Morocco and New Zealand.
See also
List of Russula species
References
amoenolens
Fungus species | Russula amoenolens | Biology | 138 |
3,014,568 | https://en.wikipedia.org/wiki/Winter%20Hexagon | The Winter Hexagon or Winter Circle/Oval is an asterism appearing to be in the form of a hexagon with vertices at Rigel, Aldebaran, Capella, Pollux, Procyon, and Sirius. It lies mostly on the Northern Hemisphere's celestial sphere. From most locations on Earth (except the South Island of New Zealand, the south of Chile and Argentina, and regions further south), this asterism is visible in the evening sky from approximately December to June and in the morning sky from July to the end of November; away from the equator, in either hemisphere, it is visible for fewer months within those windows. In the tropics and southern hemisphere, this (then called the "summer hexagon") can be extended with the bright star Canopus in the south.
Smaller and more regularly shaped is the Winter Triangle, an approximately equilateral triangle that shares two vertices (Sirius and Procyon) with the larger asterism. The third vertex is Betelgeuse, which lies near the center of the hexagon. These three stars are three of the ten brightest objects, as viewed from Earth, outside the Solar System. Betelgeuse is also particularly easy to locate, being a shoulder of Orion, which assists stargazers in finding the triangle. Once the triangle is found, the larger hexagon may then be located quite easily in cloudless skies, provided none of its stars have set or have yet to rise.
Several of the stars in the hexagon can also be found independently by following various lines traced through the stars in Orion.
The stars in the hexagon are parts of six constellations. Counter-clockwise around the hexagon, starting with Rigel, these are Orion, Taurus, Auriga, Gemini, Canis Minor, and Canis Major.
See also
Spring Triangle
Summer Triangle
Northern Cross
References
External links
The Great Winter Hexagon
APOD 2011 January 3 Winter Hexagon Over Stagecoach Colorado
Asterisms (astronomy) | Winter Hexagon | Astronomy | 438 |
73,643,281 | https://en.wikipedia.org/wiki/Validation%20and%20verification%20%28medical%20devices%29 | Validation and verification are procedures that ensure that medical devices fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests.
Validation or verification
The main difference between the two is that validation is focused on ensuring that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification is focused on ensuring that the device meets its specified design requirements.
For instance, a regulatory approval scheme (such as CE marking in Europe or FDA approval in the United States) may ensure that a product has been validated for general use before approval. An individual laboratory that introduces such an approved medical device may then not need to perform its own validation, but generally still needs to perform verification to ensure that the device works correctly.
Workflow
Standards
Standards for validation and verification of medical laboratories are outlined in the international standard ISO 15189, in addition to national and regional regulations.
As per United States federal regulations, the following analytical tests need to be done by a medical laboratory that introduces a new testing device:
To establish a reference range, the Clinical and Laboratory Standards Institute (CLSI) recommends testing at least 120 patient samples. In contrast, for the verification of a reference range, it is recommended to use a total of 40 samples, 20 from healthy men and 20 from healthy women, and the results should be compared to the published reference range. The results should be evenly spread throughout the published reference range rather than clustered at one end. The published reference range can be accepted for use if 95% of the results fall within it. Otherwise, the laboratory needs to establish its own reference range.
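A minimal sketch of that acceptance rule in Python (the function name, the sample data and the published range are hypothetical, for illustration only):

```python
# CLSI-style verification: adopt the published reference range if at
# least 95% of the verification results fall inside it.
def verify_reference_range(results, low, high, threshold=0.95):
    """Return True if the published range [low, high] can be adopted."""
    inside = sum(low <= x <= high for x in results)
    return inside / len(results) >= threshold

# Hypothetical example: 40 fasting-glucose results (20 men, 20 women)
# checked against an illustrative published range of 3.9-5.8 mmol/L.
results = [4.2, 5.1, 4.8, 5.5, 4.0] * 8   # stand-in data, n = 40
print(verify_reference_range(results, 3.9, 5.8))   # True -> adopt range
```

Note that the recommendation above also asks that results be spread across the range rather than clustered at one end, a check this sketch does not implement.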
See also
Validation (drug manufacture)
References
Quality management
Product testing
Systems engineering | Validation and verification (medical devices) | Engineering | 338 |
50,104,345 | https://en.wikipedia.org/wiki/Emanuel%20Kaspar | Emanuel Kaspar (born 25 July 1915 in Berlin, died 6 February 1971 in Kamen) was a German organic chemist. He is known for inventing and patenting the synthesis of clocortolone with Rainer Philippson in 1973. The original assignee of the patent was Schering AG. Kaspar held a PhD (Dr.rer.nat.) and worked as a researcher at Schering AG. He was also a co-inventor of several other patents held by Schering. He married fellow chemist Elisabeth Barbara Hilda Vogt von Hunoltstein (born 1923) in Wiesbaden in 1948.
References
20th-century German chemists
German organic chemists
Schering people
Scientists from Berlin
1915 births
1971 deaths | Emanuel Kaspar | Chemistry | 149 |
33,551,213 | https://en.wikipedia.org/wiki/V14%20engine | A V14 engine is a V engine with 14 cylinders mounted on the crankcase in two banks of seven cylinders. It is a very rare layout, used almost exclusively on large medium-speed diesel engines used for power generation and marine propulsion.
Marine use
MAN B&W offers a V14 layout for its 32/40, 32/44CR, 48/60CR, 49/60DF, and 51/60DF engines, with outputs ranging from . MAN V14 engines have been installed on cruise ships such as the Explorer Dream and Norwegian Spirit, both of which have 14V48/60 engines producing each, and on some cargo vessels. However, other major manufacturers do not normally offer medium-speed engines in the V14 configuration.
Wärtsilä has only recently begun to offer V14 versions of its latest engine models, the 31, 46F, and 46DF.
In the past, V14 engines have also been offered by other manufacturers. Between 1982 and 1987 nineteen SA-15 arctic cargo ships were built with two 14-cylinder Wärtsilä-Sulzer 14ZV40/48 engines producing . SEMT Pielstick, nowadays part of MAN B&W, also produced four-stroke engines with 14 cylinders in V-configuration (14PC2 and 14PC4). They were used for example on RFA Bayleaf, a Leaf-class support tanker of the Royal Fleet Auxiliary.
See also
Straight-14 engine
References
14
Piston engine configurations
14-cylinder engines | V14 engine | Engineering | 301 |
18,184 | https://en.wikipedia.org/wiki/Lizard | Lizard is the common name used for all squamate reptiles other than snakes (and to a lesser extent amphisbaenians), encompassing over 7,000 species, ranging across all continents except Antarctica, as well as most oceanic island chains. The grouping is paraphyletic as some lizards are more closely related to snakes than they are to other lizards. Lizards range in size from chameleons and geckos a few centimeters long to the 3-meter-long Komodo dragon.
Most lizards are quadrupedal, running with a strong side-to-side motion. Some lineages (known as "legless lizards") have secondarily lost their legs, and have long snake-like bodies. Some lizards, such as the forest-dwelling Draco, are able to glide. They are often territorial, the males fighting off other males and signalling, often with bright colours, to attract mates and to intimidate rivals. Lizards are mainly carnivorous, often being sit-and-wait predators; many smaller species eat insects, while the Komodo eats mammals as big as water buffalo.
Lizards make use of a variety of antipredator adaptations, including venom, camouflage, reflex bleeding, and the ability to sacrifice and regrow their tails.
Anatomy
Largest and smallest
The adult length of species within the suborder ranges from a few centimeters for chameleons such as Brookesia micra and geckos such as Sphaerodactylus ariasae to nearly in the case of the largest living varanid lizard, the Komodo dragon. Most lizards are fairly small animals.
Distinguishing features
Lizards typically have rounded torsos, elevated heads on short necks, four limbs and long tails, although some are legless. Lizards and snakes share a movable quadrate bone, distinguishing them from the rhynchocephalians, which have more rigid diapsid skulls. Some lizards such as chameleons have prehensile tails, assisting them in climbing among vegetation.
As in other reptiles, the skin of lizards is covered in overlapping scales made of keratin. This provides protection from the environment and reduces water loss through evaporation. This adaptation enables lizards to thrive in some of the driest deserts on earth. The skin is tough and leathery, and is shed (sloughed) as the animal grows. Unlike snakes which shed the skin in a single piece, lizards slough their skin in several pieces. The scales may be modified into spines for display or protection, and some species have bone osteoderms underneath the scales.
The dentitions of lizards reflect their wide range of diets, including carnivorous, insectivorous, omnivorous, herbivorous, nectivorous, and molluscivorous. Species typically have uniform teeth suited to their diet, but several species have variable teeth, such as cutting teeth in the front of the jaws and crushing teeth in the rear. Most species are pleurodont, though agamids and chameleons are acrodont.
The tongue can be extended outside the mouth, and is often long. In the beaded lizards, whiptails and monitor lizards, the tongue is forked and used mainly or exclusively to sense the environment, continually flicking out to sample the environment, and back to transfer molecules to the vomeronasal organ responsible for chemosensation, analogous to but different from smell or taste. In geckos, the tongue is used to lick the eyes clean: they have no eyelids. Chameleons have very long sticky tongues which can be extended rapidly to catch their insect prey.
Three lineages, the geckos, anoles, and chameleons, have modified the scales under their toes to form adhesive pads, highly prominent in the first two groups. The pads are composed of millions of tiny setae (hair-like structures) which fit closely to the substrate to adhere using van der Waals forces; no liquid adhesive is needed. In addition, the toes of chameleons are divided into two opposed groups on each foot (zygodactyly), enabling them to perch on branches as birds do.
Physiology
Locomotion
Aside from legless lizards, most lizards are quadrupedal and move using gaits with alternating movement of the right and left limbs with substantial body bending. This body bending prevents significant respiration during movement, limiting their endurance, in a mechanism called Carrier's constraint. Several species can run bipedally, and a few can prop themselves up on their hindlimbs and tail while stationary. Several small species such as those in the genus Draco can glide: some can attain a distance of , losing in height. Some species, like geckos and chameleons, adhere to vertical surfaces including glass and ceilings. Some species, like the common basilisk, can run across water.
Senses
Lizards make use of their senses of sight, touch, olfaction and hearing like other vertebrates. The balance of these varies with the habitat of different species; for instance, skinks that live largely covered by loose soil rely heavily on olfaction and touch, while geckos depend largely on acute vision for their ability to hunt and to evaluate the distance to their prey before striking. Monitor lizards have acute vision, hearing, and olfactory senses. Some lizards make unusual use of their sense organs: chameleons can steer their eyes in different directions, sometimes providing non-overlapping fields of view, such as forwards and backwards at once. Lizards lack external ears, having instead a circular opening in which the tympanic membrane (eardrum) can be seen. Many species rely on hearing for early warning of predators, and flee at the slightest sound.
As in snakes and many mammals, all lizards have a specialised olfactory system, the vomeronasal organ, used to detect pheromones. Monitor lizards transfer scent from the tip of their tongue to the organ; the tongue is used only for this information-gathering purpose, and is not involved in manipulating food.
Some lizards, particularly iguanas, have retained a photosensory organ on the top of their heads called the parietal eye, a basal ("primitive") feature also present in the tuatara. This "eye" has only a rudimentary retina and lens and cannot form images, but is sensitive to changes in light and dark and can detect movement. This helps them detect predators stalking it from above.
Venom
Until 2006 it was thought that the Gila monster and the Mexican beaded lizard were the only venomous lizards. However, several species of monitor lizards, including the Komodo dragon, produce powerful venom in their oral glands. Lace monitor venom, for instance, causes swift loss of consciousness and extensive bleeding through its pharmacological effects, both lowering blood pressure and preventing blood clotting. Nine classes of toxin known from snakes are produced by lizards. The range of actions provides the potential for new medicinal drugs based on lizard venom proteins.
Genes associated with venom toxins have been found in the salivary glands of a wide range of lizards, including species traditionally thought of as non-venomous, such as iguanas and bearded dragons. This suggests that these genes evolved in the common ancestor of lizards and snakes, some 200 million years ago (forming a single clade, the Toxicofera). However, most of these putative venom genes were "housekeeping genes" found in all cells and tissues, including skin and cloacal scent glands. The genes in question may thus be evolutionary precursors of venom genes.
Respiration
Recent studies (2013 and 2014) on the lung anatomy of the savannah monitor and green iguana found them to have a unidirectional airflow system, which involves the air moving in a loop through the lungs when breathing. This was previously thought to only exist in the archosaurs (crocodilians and birds). This may be evidence that unidirectional airflow is an ancestral trait in diapsids.
Reproduction and life cycle
As with all amniotes, lizards rely on internal fertilisation, and copulation involves the male inserting one of his hemipenes into the female's cloaca. Female lizards also have hemiclitorises, a doubled clitoris. The majority of species are oviparous (egg laying). The female deposits the eggs in a protective structure like a nest or crevice or simply on the ground. Depending on the species, clutch size can vary from 4–5 percent of the female's body weight to 40–50 percent, and clutches range from one or a few large eggs to dozens of small ones.
In most lizards, the eggs have leathery shells to allow for the exchange of water, although more arid-living species have calcified shells to retain water. Inside the eggs, the embryos use nutrients from the yolk. Parental care is uncommon and the female usually abandons the eggs after laying them. Brooding and protection of eggs do occur in some species. The female prairie skink uses respiratory water loss to maintain the humidity of the eggs, which facilitates embryonic development. In lace monitors, the young hatch after close to 300 days, and the female returns to help them escape the termite mound where the eggs were laid.
Around 20 percent of lizard species reproduce via viviparity (live birth). This is particularly common in Anguimorphs. Viviparous species give birth to relatively developed young which look like miniature adults. Embryos are nourished via a placenta-like structure. A minority of lizards have parthenogenesis (reproduction from unfertilised eggs). These species consist entirely of females, which reproduce asexually with no need for males. This is known to occur in various species of whiptail lizards. Parthenogenesis was also recorded in species that normally reproduce sexually. A captive female Komodo dragon produced a clutch of eggs, despite being separated from males for over two years.
Sex determination in lizards can be temperature-dependent. The temperature of the eggs' micro-environment can determine the sex of the hatched young: low temperature incubation produces more females while higher temperatures produce more males. However, some lizards have sex chromosomes and both male heterogamety (XY and XXY) and female heterogamety (ZW) occur.
Aging
A significant component of aging in the painted dragon lizard Ctenophorus pictus is fading breeding colors. By manipulating superoxide levels (using a superoxide dismutase mimetic), it was shown that this fading coloration is likely due to a gradual, age-related loss of innate antioxidant capacity, reflected in increasing DNA damage.
Behaviour
Diurnality and thermoregulation
The majority of lizard species are active during the day, though some are active at night, notably geckos. As ectotherms, lizards have a limited ability to regulate their body temperature, and must seek out and bask in sunlight to gain enough heat to become fully active. Thermoregulatory behavior can benefit lizards in the short term, as it allows them to buffer environmental variation and endure climate warming.
At high altitudes, Podarcis hispanicus responds to higher temperatures with a darker dorsal coloration, which provides protection from UV radiation and background matching. Its thermoregulatory mechanisms also allow the lizard to maintain its ideal body temperature for optimal mobility.
Territoriality
Most social interactions among lizards are between breeding individuals. Territoriality is common and is correlated with species that use sit-and-wait hunting strategies. Males establish and maintain territories that contain resources that attract females and which they defend from other males. Important resources include basking, feeding, and nesting sites as well as refuges from predators. The habitat of a species affects the structure of territories, for example, rock lizards have territories atop rocky outcrops. Some species may aggregate in groups, enhancing vigilance and lessening the risk of predation for individuals, particularly for juveniles. Agonistic behaviour typically occurs between sexually mature males over territory or mates and may involve displays, posturing, chasing, grappling and biting.
Communication
Lizards signal both to attract mates and to intimidate rivals. Visual displays include body postures and inflation, push-ups, bright colours, mouth gapings and tail waggings. Male anoles and iguanas have dewlaps or skin flaps which come in various sizes, colours and patterns and the expansion of the dewlap as well as head-bobs and body movements add to the visual signals. Some species have deep blue dewlaps and communicate with ultraviolet signals. Blue-tongued skinks will flash their tongues as a threat display. Chameleons are known to change their complex colour patterns when communicating, particularly during agonistic encounters. They tend to show brighter colours when displaying aggression and darker colours when they submit or "give up".
Several gecko species are brightly coloured; some species tilt their bodies to display their coloration. In certain species, brightly coloured males turn dull when not in the presence of rivals or females. While it is usually males that display, in some species females also use such communication. In the bronze anole, head-bobs are a common form of communication among females, the speed and frequency varying with age and territorial status. Chemical cues or pheromones are also important in communication. Males typically direct signals at rivals, while females direct them at potential mates. Lizards may be able to recognise individuals of the same species by their scent.
Acoustic communication is less common in lizards. Hissing, a typical reptilian sound, is mostly produced by larger species as part of a threat display, accompanying gaping jaws. Some groups, particularly geckos, snake-lizards, and some iguanids, can produce more complex sounds and vocal apparatuses have independently evolved in different groups. These sounds are used for courtship, territorial defense and in distress, and include clicks, squeaks, barks and growls. The mating call of the male tokay gecko is heard as "tokay-tokay!". Tactile communication involves individuals rubbing against each other, either in courtship or in aggression. Some chameleon species communicate with one another by vibrating the substrate that they are standing on, such as a tree branch or leaf.
Ecology
Distribution and habitat
Lizards are found worldwide, excluding the far north and Antarctica, and some islands. They can be found in elevations from sea level to . They prefer warmer, tropical climates but are adaptable and can live in all but the most extreme environments. Lizards also exploit a number of habitats; most primarily live on the ground, but others may live in rocks, on trees, underground and even in water. The marine iguana is adapted for life in the sea.
Diet
The majority of lizard species are predatory and the most common prey items are small, terrestrial invertebrates, particularly insects. Many species are sit-and-wait predators though others may be more active foragers. Chameleons prey on numerous insect species, such as beetles, grasshoppers and winged termites as well as spiders. They rely on persistence and ambush to capture these prey. An individual perches on a branch and stays perfectly still, with only its eyes moving. When an insect lands, the chameleon focuses its eyes on the target and slowly moves toward it before projecting its long sticky tongue which, when hauled back, brings the attached prey with it. Geckos feed on crickets, beetles, termites and moths.
Termites are an important part of the diets of some species of Autarchoglossa, since, as social insects, they can be found in large numbers in one spot. Ants may form a prominent part of the diet of some lizards, particularly among the lacertas. Horned lizards are also well known for specializing on ants. Due to their small size and indigestible chitin, ants must be consumed in large amounts, and ant-eating lizards have larger stomachs than even herbivorous ones. Species of skink and alligator lizards eat snails and their power jaws and molar-like teeth are adapted for breaking the shells.
Larger species, such as monitor lizards, can feed on larger prey including fish, frogs, birds, mammals and other reptiles. Prey may be swallowed whole and torn into smaller pieces. Both bird and reptile eggs may also be consumed as well. Gila monsters and beaded lizards climb trees to reach both the eggs and young of birds. Despite being venomous, these species rely on their strong jaws to kill prey. Mammalian prey typically consists of rodents and leporids; the Komodo dragon can kill prey as large as water buffalo. Dragons are prolific scavengers, and a single decaying carcass can attract several from away. A dragon is capable of consuming a carcass in 17 minutes.
Around 2 percent of lizard species, including many iguanids, are herbivores. Adults of these species eat plant parts like flowers, leaves, stems and fruit, while juveniles eat more insects. Plant parts can be hard to digest, and, as they get closer to adulthood, juvenile iguanas eat faeces from adults to acquire the microflora necessary for their transition to a plant-based diet. Perhaps the most herbivorous species is the marine iguana which dives to forage for algae, kelp and other marine plants. Some non-herbivorous species supplement their insect diet with fruit, which is easily digested.
Antipredator adaptations
Lizards have a variety of antipredator adaptations, including running and climbing, venom, camouflage, tail autotomy, and reflex bleeding.
Camouflage
Lizards exploit a variety of different camouflage methods. Many lizards are disruptively patterned. In some species, such as Aegean wall lizards, individuals vary in colour, and select rocks which best match their own colour to minimise the risk of being detected by predators. The Moorish gecko is able to change colour for camouflage: when a light-coloured gecko is placed on a dark surface, it darkens within an hour to match the environment. The chameleons in general use their ability to change their coloration for signalling rather than camouflage, but some species such as Smith's dwarf chameleon do use active colour change for camouflage purposes.
The flat-tail horned lizard's body is coloured like its desert background, and is flattened and fringed with white scales to minimise its shadow.
Autotomy
Many lizards, including geckos and skinks, are capable of shedding their tails (autotomy). The detached tail, sometimes brilliantly coloured, continues to writhe after detaching, distracting the predator's attention from the fleeing prey. Lizards partially regenerate their tails over a period of weeks. Some 326 genes are involved in regenerating lizard tails. The fish-scale gecko Geckolepis megalepis sheds patches of skin and scales if grabbed.
Escape, playing dead, reflex bleeding
Many lizards attempt to escape from danger by running to a place of safety; for example, wall lizards can run up walls and hide in holes or cracks. Horned lizards adopt differing defences for specific predators. They may play dead to deceive a predator that has caught them; attempt to outrun the rattlesnake, which does not pursue prey; but stay still, relying on their cryptic coloration, for Masticophis whip snakes which can catch even swift prey. If caught, some species such as the greater short-horned lizard puff themselves up, making their bodies hard for a narrow-mouthed predator like a whip snake to swallow. Finally, horned lizards can squirt blood at cat and dog predators from a pouch beneath their eyes, to a distance of about ; the blood tastes foul to these attackers.
Evolution
Fossil history
The closest living relatives of lizards are rhynchocephalians, a once diverse order of reptiles, of which there is now only one living species, the tuatara of New Zealand. Some reptiles from the Early and Middle Triassic, like Sophineta and Megachirella, are suggested to be stem-group squamates, more closely related to modern lizards than to rhynchocephalians; however, their position is disputed, with some studies recovering them as less closely related to squamates than rhynchocephalians are. The oldest undisputed lizards date to the Middle Jurassic, from remains found in Europe, Asia and North Africa. Lizard morphological and ecological diversity substantially increased over the course of the Cretaceous. In the Palaeogene, lizard body sizes in North America peaked during the middle of the period.
Mosasaurs likely evolved from an extinct group of aquatic lizards known as aigialosaurs in the Early Cretaceous. Dolichosauridae is a family of Late Cretaceous aquatic varanoid lizards closely related to the mosasaurs.
Phylogeny
External
The position of the lizards and other Squamata among the reptiles was studied using fossil evidence by Rainer Schoch and Hans-Dieter Sues in 2015. Lizards form about 60% of the extant non-avian reptiles.
Internal
Both the snakes and the Amphisbaenia (worm lizards) are clades deep within the Squamata (the smallest clade that contains all the lizards), so "lizard" is paraphyletic.
The cladogram is based on genomic analysis by Wiens and colleagues in 2012 and 2016. Excluded taxa are shown in upper case on the cladogram.
Taxonomy
In the 13th century, lizards were recognized in Europe as part of a broad category of reptiles that consisted of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, […], assorted amphibians, and worms", as recorded by Vincent of Beauvais in his Mirror of Nature. The seventeenth century saw changes in this loose description. The name Sauria was coined by James Macartney (1802); it was the Latinisation of the French name Sauriens, coined by Alexandre Brongniart (1800) for an order of reptiles in the classification proposed by the author, containing lizards and crocodilians, later discovered not to be each other's closest relatives. Later authors used the term "Sauria" in a more restricted sense, i.e. as a synonym of Lacertilia, a suborder of Squamata that includes all lizards but excludes snakes. This classification is rarely used today because Sauria so-defined is a paraphyletic group. It was defined as a clade by Jacques Gauthier, Arnold G. Kluge and Timothy Rowe (1988) as the group containing the most recent common ancestor of archosaurs and lepidosaurs (the groups containing crocodiles and lizards, as per Macartney's original definition) and all its descendants. A different definition was formulated by Michael deBraga and Olivier Rieppel (1997), who defined Sauria as the clade containing the most recent common ancestor of Choristodera, Archosauromorpha, Lepidosauromorpha and all their descendants. However, these uses have not gained wide acceptance among specialists.
Suborder Lacertilia (Sauria) – (lizards)
Family †Bavarisauridae
Family †Eichstaettisauridae
Infraorder Iguanomorpha
Family †Arretosauridae
Family †Euposauridae
Family Corytophanidae (casquehead lizards)
Family Iguanidae (iguanas and spinytail iguanas)
Family Phrynosomatidae (earless, spiny, tree, side-blotched and horned lizards)
Family Polychrotidae (anoles)
Family Leiosauridae (see Polychrotinae)
Family Tropiduridae (neotropical ground lizards)
Family Liolaemidae (see Tropidurinae)
Family Leiocephalidae (see Tropidurinae)
Family Crotaphytidae (collared and leopard lizards)
Family Opluridae (Madagascar iguanids)
Family Hoplocercidae (wood lizards, clubtails)
Family †Priscagamidae
Family †Isodontosauridae
Family Agamidae (agamas, frilled lizards)
Family Chamaeleonidae (chameleons)
Infraorder Gekkota
Family Gekkonidae (geckos)
Family Pygopodidae (legless geckos)
Family Dibamidae (blind lizards)
Infraorder Scincomorpha
Family †Paramacellodidae
Family †Slavoiidae
Family Scincidae (skinks)
Family Cordylidae (spinytail lizards)
Family Gerrhosauridae (plated lizards)
Family Xantusiidae (night lizards)
Family Lacertidae (wall lizards or true lizards)
Family †Mongolochamopidae
Family †Adamisauridae
Family Teiidae (tegus and whiptails)
Family Gymnophthalmidae (spectacled lizards)
Infraorder Diploglossa
Family Anguidae (slowworms, glass lizards)
Family Anniellidae (American legless lizards)
Family Xenosauridae (knob-scaled lizards)
Infraorder Platynota (Varanoidea)
Family Varanidae (monitor lizards)
Family Lanthanotidae (earless monitor lizards)
Family Helodermatidae (Gila monsters and beaded lizards)
Family †Mosasauridae (marine lizards)
Convergence
Lizards have frequently evolved convergently, with multiple groups independently developing similar morphology and ecological niches. Anolis ecomorphs have become a model system in evolutionary biology for studying convergence. Limbs have been lost or reduced independently over two dozen times across lizard evolution, including in the Anniellidae, Anguidae, Cordylidae, Dibamidae, Gymnophthalmidae, Pygopodidae, and Scincidae; snakes are just the most famous and species-rich group of Squamata to have followed this path.
Relationship with humans
Interactions and uses by humans
Most lizard species are harmless to humans. Only the largest lizard species, the Komodo dragon, which reaches in length and weighs up to , has been known to stalk, attack, and, on occasion, kill humans. An eight-year-old Indonesian boy died from blood loss after an attack in 2007.
Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko). Monitor lizards such as the savannah monitor and tegus such as the Argentine tegu and red tegu are also kept.
Green iguanas are eaten in Central America, where they are sometimes referred to as "chicken of the tree" after their habit of resting in trees and their supposedly chicken-like taste, while spiny-tailed lizards are eaten in Africa. In North Africa, Uromastyx species are considered dhaab or 'fish of the desert' and eaten by nomadic tribes. Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesized for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
In culture
Lizards appear in myths and folktales around the world. In Australian Aboriginal mythology, Tarrotarro, the lizard god, split the human race into male and female, and gave people the ability to express themselves in art. A lizard king named Mo'o features in Hawaii and other cultures in Polynesia. In the Amazon, the lizard is the king of beasts, while among the Bantu of Africa, the god UNkulunkulu sent a chameleon to tell humans they would live forever, but the chameleon was held up, and another lizard brought a different message, that the time of humanity was limited. A popular legend in Maharashtra tells the tale of how a common Indian monitor, with ropes attached, was used to scale the walls of the fort in the Battle of Sinhagad. In the Bhojpuri-speaking region of India and Nepal, there is a belief among children that touching a skink's tail three (or five) times with the shortest finger brings money.
Lizards in many cultures share the symbolism of snakes, especially as an emblem of resurrection. This may have derived from their regular molting. The motif of lizards on Christian candle holders probably alludes to the same symbolism. According to Jack Tresidder, in Egypt and the Classical world, they were beneficial emblems, linked with wisdom. In African, Aboriginal and Melanesian folklore they are linked to cultural heroes or ancestral figures.
Notes
References
General sources
Further reading
External links
Paraphyletic groups
Extant Hettangian first appearances | Lizard | Biology | 5,963 |
39,361,626 | https://en.wikipedia.org/wiki/George%20S.%20Whitby | George Stafford Whitby (1887–1972) was the head of the University of Akron rubber laboratory and for many years was the only person in the United States who taught rubber chemistry. Whitby received the Charles Goodyear Medal in 1954 and in 1972, he was inducted into the International Rubber Science Hall of Fame. In 1986 the Rubber Division established the George Stafford Whitby Award in his honor.
Personal
Whitby was born in Hull, England on May 23, 1887. He immigrated to the United States in 1942, becoming an American citizen in 1946. He died at Delray Beach, Florida on January 10, 1972.
Education
Whitby received the BS degree in 1907 from the Royal College of Science in London. He obtained MS and PhD degrees from McGill University in 1918 and 1920.
Career
Upon completing his undergraduate education in 1907, Whitby served as a chemist for the Societe Financiere des Caoutchoucs in Malaysia. After completing his graduate education, he accepted an appointment as a full professor at McGill University in 1923. In 1929, he accepted a position as director of the chemical division of the National Chemical Research Council of Canada. He joined the University of Akron faculty in 1942, and retired in 1954. His most cited work was an investigation of emulsifier-free polymerization in aqueous media.
Awards
1928 - Colwyn medal
1954 - Charles Goodyear Medal
1972 - Inducted into the International Rubber Science Hall of Fame.
References
External links
Oral history on George S. Whitby
1887 births
1972 deaths
Polymer scientists and engineers
U.S. Synthetic Rubber Program
University of Akron faculty
British emigrants to the United States
20th-century American chemists | George S. Whitby | Chemistry,Materials_science | 340 |
744,949 | https://en.wikipedia.org/wiki/C21H23NO5 | The molecular formula C21H23NO5 (molar mass: 369.41 g/mol, exact mass: 369.1576 u) may refer to:
Allocryptopine
Cryptopine
Heroin, or diacetylmorphine
Molecular formulas | C21H23NO5 | Physics,Chemistry | 73 |
1,057,064 | https://en.wikipedia.org/wiki/Novo%20Nordisk | Novo Nordisk A/S is a Danish multinational pharmaceutical company headquartered in Bagsværd, Denmark with production facilities in nine countries and affiliates or offices in five countries. Novo Nordisk is controlled by majority shareholder Novo Holdings A/S which holds approximately 28% of its shares and a majority (77%) of its voting shares.
Novo Nordisk manufactures and markets pharmaceutical products and services, specifically diabetes care medications and devices. Its main product is the drug semaglutide, used to treat diabetes under the brand names Ozempic and Rybelsus and obesity under the brand name Wegovy. Novo Nordisk is also involved with hemostasis management, growth hormone therapy, and hormone replacement therapy. The company makes several drugs under various brand names, including Levemir, Tresiba, NovoLog, Novolin R, NovoSeven, NovoEight, and Victoza.
Novo Nordisk employs more than 48,000 people globally, and markets its products in 168 countries. The corporation was created in 1989, through a merger of two Danish companies, which date back to the 1920s. The Novo Nordisk logo is the Apis bull, one of the sacred animals of ancient Egypt, denoted by the hieroglyph 𓃒. Novo Nordisk is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA).
The company was ranked 25th among Fortune's 100 Best Companies to Work For in 2010, and subsequently ranked 72nd in 2014 and 73rd in 2017. In January 2012, Novo Nordisk was named the most sustainable company in the world by the business magazine Corporate Knights, while spin-off company Novozymes was named fourth. It is a leader in the FTSE4Good Index, and the only European company in the top ten. Novo Nordisk is the largest pharmaceutical company in Denmark. Novo Nordisk's market capitalization exceeded the GDP of Denmark's domestic economy in 2023, and it is the highest valued company in Europe. Revenue in 2023 was 33.724 billion USD.
History
1923
Nordisk Insulinlaboratorium commercialises the production of insulin.
1982–1994
The company established its presence in the United States in 1982 and Canada in 1984.
In 1986, Novo Industri A/S acquired the Ferrosan Group, now named "Novo Nordisk Pharmatech A/S".
In 1989, Novo Industri A/S (Novo Terapeutisk Laboratorium) and Nordisk Gentofte A/S (Nordisk Insulinlaboratorium) merged to become Novo Nordisk A/S, the world's largest producer of insulin with headquarters in Bagsværd, Copenhagen.
In 1991, Novo Nordisk Engineering (now NNE A/S) demerged after working as in-house consultants at Novo Nordisk for years, to provide standard engineering services (end-to-end engineering) to pharma manufacturing companies.
In 1994, Novo Nordisk's existing information technology unit was spun out as NNIT A/S. The company was converted into a wholly owned aktieselskab in 2004, and in March 2015 NNIT was floated on the Nasdaq Nordic.
2000–2018
Novo's enzymes business, Novozymes A/S, was spun-out in 2000.
Novo acquired Xellia for $700 million in 2013.
The same year, Novo Nordisk USA moved into new headquarters offices in Plainsboro Township, New Jersey, by way of extensively renovating abandoned premises. This action served to consolidate several facilities that the company had previously had in Plainsboro.
In 2015, the company announced it would collaborate with Ablynx, using its nanobody technology to develop at least one new drug candidate.
In January 2018, Reuters reported that Novo had offered to acquire Ablynx for $3.1 billion, having made an unreported offer for the company in mid-December. The Ablynx board rejected this offer the same day, explaining that the price undervalued the business; Novo ultimately lost out to Sanofi, which bid $4.8 billion. Later the same year, the company announced it would acquire Ziylo for around $800 million.
2020–present
In March 2020, Novo volunteers started testing samples for SARS-CoV-2 with RT-qPCR equipment during the coronavirus pandemic to increase available test capacity. In June, the business announced it would acquire AstraZeneca's spin-off Corvidia Therapeutics for an initial sum of $725 million (up to a performance-related maximum of $2.1 billion), boosting its presence in cardiovascular diseases. In November, the company announced it would acquire Emisphere Technologies for $1.8 billion, gaining control of a pill-based treatment for diabetes.
In November 2021, Novo announced it would acquire Dicerna Pharmaceuticals and its RNAi therapeutics, for $3.3 billion ($38.25 per share).
In September 2022, Novo agreed to acquire Forma Therapeutics for $1.1 billion with the intent to expand its sickle cell disease and rare blood disorders portfolio.
By 2022 the popularity of Novo's Wegovy and Ozempic for weight loss was so great as to significantly increase growth of the entire economy of Denmark. Two-thirds of Denmark's overall economic growth in 2022 was attributed to the pharmaceutical industry.
The company's profits increased by 45% year over year in the first half of 2023. Most of the growth occurred from its weight loss drugs, Wegovy and Ozempic, which accounted for 55% of the company's 2023 revenue.
In August 2023, Novo agreed to acquire the Montreal-headquartered pharmaceutical company, Inversago Pharma for $1 billion and Embark Biotech for up to $500 million. In October 2023, the company announced it would acquire ocedurenone—an experimental drug for uncontrolled hypertension and potentially beneficial in treating cardiovascular and kidney diseases—from KBP Biosciences for $1.3 billion.
In November 2023, Novo Nordisk announced investment of €2.1 billion in a French production facility to increase the production capacity and manufacturing of its popular anti-obesity medication.
In February 2024, parent company Novo Holdings A/S agreed to acquire Catalent for $16.5 billion. On completion, Novo Nordisk said it would acquire three manufacturing facilities from its parent for $11 billion to scale up production to meet the massive demand for Wegovy and Ozempic.
In March 2024 Novo Nordisk reached a $604 billion market capitalization and became the 12th most valuable company in the world. The company's stock jumped to a record high after early trial data showed positive results for its new experimental weight loss pill amycretin. The company also announced it would acquire Cardior Pharmaceuticals and its cardiovascular disease portfolio for up to $1.1 billion.
As of April 2024, the flow of cash from Novo Nordisk's weight-loss drugs was continuing to solidify its status as the most valuable company in Europe, to the point that economists were worried that Denmark might come down with Dutch disease (that is, an economy that becomes overly dependent on a single booming sector at the expense of the rest). The company's market capitalization of $570 billion remained larger than the entire economy of Denmark, its $2.3 billion income tax bill for 2023 made it the largest taxpayer in the country, and its rapid growth was driving nearly all of the expansion of Denmark's economy. The company had started to move away from its traditional focus on diabetes care towards a more ambitious mission to "defeat serious chronic diseases", and towards that end, hired over 10,000 people in 2023 alone. To effectively manage the rapid expansion of its workforce while maintaining its traditional corporate culture, the Novo Nordisk Way, the company put over 400 senior executives through a leadership development program called NNX, which stands for Novo Nordisk Next.
In May 2024, the company announced it would acquire Austrian fluid management service business, Single Use Support.
In June 2024 the company announced plans to build a new production plant in Clayton, North Carolina, at a cost of $4.1 billion. It will be the company's fourth in the state of North Carolina and will be used for production of the semaglutide products Ozempic and Wegovy. The company also announced plans to acquire US-based Catalent to increase production supply.
As of October 2024, Novo Nordisk was the second most valuable drug company in the world by market capitalization, second only to its archrival Eli Lilly and Company.
Acquisition history
Novo Nordisk A/S
Xellia (Acq 2013)
Ziylo (Acq 2018)
Corvidia Therapeutics (Acq 2020)
Emisphere Technologies (Acq 2020)
Dicerna Pharmaceuticals (Acq 2021)
Forma Therapeutics (Acq 2022)
Inversago (Acq 2023)
Embark Biotech (Acq 2023)
Catalent (Acq 2024), together with Catalent's own earlier acquisitions:
  Aptuit (Acq 2012)
  Micron Technologies (Acq 2014)
  Pharmatek Laboratories (Acq 2016)
  Cook Pharmica (Acq 2017)
  Juniper Pharmaceuticals (Acq 2018)
  Paragon Bioservices Inc (Acq 2019)
  MaSTherCell (Acq 2020)
  Rheincell Therapeutics (Acq 2021)
  Bettera Holdings LLC (Acq 2021)
Toxicogenomics
Novo Nordisk is involved in government funded collaborative research projects with other industrial and governmental partners. One example in the area of non-clinical safety assessment is the InnoMed PredTox project. The company is expanding its activities in joint research projects within the framework of the Innovative Medicines Initiative of the European Federation of Pharmaceutical Industries and Associations and the European Commission.
Diabetic work
Novo Nordisk founded the World Diabetes Foundation to save the lives of those affected by diabetes in developing countries and supported a UN (United Nations) resolution to fight diabetes, making diabetes, alongside HIV/AIDS, one of only two diseases the UN has committed to combat.
Diabetic treatments account for 85% of Novo Nordisk's business. Novo Nordisk works with doctors, nurses, and patients, to develop products for self-managing diabetes conditions. The DAWN (Diabetes Attitudes, Wishes and Needs) 2001 study was a global survey of the psychosocial aspects of living with diabetes. It involved over 5,000 people with diabetes and almost 4,000 care providers. This study was designed to identify barriers to optimal health and quality of life. A follow-up study completed in 2012 involved more than 15,000 people living with, or caring for, those with diabetes. In response to British findings, a National Action Plan (NAP) was developed, with a multidisciplinary steering committee, to support the delivery of individualised person-focused care in the United Kingdom. The NAP seeks to provide a holistic approach to a diabetic treatment for patients and their families.
The i3-diabetes programme is a collaboration between the King's Health Partners, one of only six Academic Health Sciences Centres (AHSCs) in England, and Novo Nordisk. The programme is a five-year collaboration designed to deliver personalised care that will lead to improved outcomes for people living with diabetes, and more efficient and effective ways of caring for people with diabetes.
Diabetic support advocacy
Novo Nordisk has sponsored the International Diabetes Federation's Unite for Diabetes campaign.
In March 2014, Novo Nordisk announced a partnership program entitled ‘Cities Changing Diabetes’, which aims to combat urban diabetes. The partnership includes University College London (UCL) and is supported by the Steno Diabetes Center, as well as a range of local partners including healthcare professionals, city authorities, urban planners, businesses, academics and community leaders.
A November 2014 newspaper article suggested that a recent medical research breakthrough at Harvard University (creating insulin-producing cells from embryonic stem cells) could potentially put Novo Nordisk out of business. Dr Alan Moses, the chief medical officer of Novo Nordisk, commented that the biology of diabetes is incredibly complex, but also that Novo Nordisk's mission is to alleviate and cure diabetes. If this new medical advance "...meant the dissolution of Novo Nordisk, that'd be fine."
In September 2023, Novo Nordisk and UNICEF announced a multi-year expansion of their collaboration to address childhood overweight and obesity.
In October 2024, Novo Nordisk published a study in the scientific journal Nature describing a novel glucose-sensitive insulin, NNC2215, that can reduce the risk of hypoglycemia in animal models.
Research and pipeline
Novo Nordisk was researching pulmonary delivery systems for diabetic medications, and in the early stages of research into autoimmune and chronic inflammatory diseases, using technologies such as translational immunology and monoclonal antibodies. In September 2014, the company announced a decision to discontinue all research in inflammatory disorders, including the discontinuation of R&D in anti-IL-20 for the treatment of rheumatoid arthritis.
In September 2018, it was reported that the company would lay off 400 administrative staff, laboratory technicians and scientists, in Denmark and China in order to concentrate research and development efforts on “transformational biological and technological innovation”.
Controversies
In 2010, Novo Nordisk breached the code of conduct of the Association of the British Pharmaceutical Industry (ABPI) by failing to provide information about side-effects of Victoza and by promoting Victoza prior to being granted market authorisation.
In 2013, Novo Nordisk had to pay back billion to the Danish tax authorities due to transfer mispricing.
In March 2013, a debate emerged in which scientists questioned whether the incretin class of diabetic medications – the class to which Victoza belongs – had an increased risk of side effects in the pancreas such as pancreatitis and pancreatic cancer. It was concluded that data currently available did not confirm these concerns.
In October 2013, batches of NovoMix 30 FlexPen and Penfill insulin were recalled in some European countries as their analysis had shown that a small percentage of the products in these batches did not meet the specifications for insulin strength.
In September 2017, Novo Nordisk agreed to pay $58.7 million to end a United States Department of Justice probe into the lack of FDA disclosure to doctors about the cancer risk for their diabetic drug, Victoza.
In March 2023, Novo Nordisk was suspended from the ABPI for a period of two years, for engaging in misleading marketing practices that amounted to "bribing health professionals with inducement to prescribe". This is only the eighth time in the last 40 years that the ABPI has sanctioned a member company. Consequently, the Royal College of General Practitioners and the Royal College of Physicians ended their corporate partnerships, as continuing them would have breached their ethical guidance. The Novo Nordisk UK General Manager, Pinder Sahota, chose to resign as President of the ABPI prior to the suspension.
On February 2, 2024, the United States Judicial Panel on Multidistrict Litigation ordered that 55 lawsuits pending in federal courts be consolidated into a multidistrict litigation. The majority of the cases were against Novo Nordisk, but some were brought against Eli Lilly. The Ozempic lawsuits allege gastroparesis, ileus and other injuries caused by GLP-1 RAS. The case is known as MDL No. 3094 In Re: Glucagon-Like Peptide-1 Receptor Agonists (GLP-1 RAS) Products Liability Litigation. As of August 6, 2024, there were 235 active Ozempic lawsuits.
In 2024, Novo Nordisk's drug pricing in the US became a target of lawmakers, including Senator Bernie Sanders and the Senate Committee on Health, Education, Labor and Pensions (HELP). The committee investigation found Novo Nordisk's drug Ozempic priced at $969 per month in the US, compared to $155 in Canada and $59 in Germany. Its weight-loss drug Wegovy is priced at $1,349 per month in the US compared to $140 in Germany and $92 in the UK. In July 2024, US President Joe Biden joined Sanders in stating "Novo Nordisk and Eli Lilly must stop ripping off Americans with high drug prices."
In September 2024, CEO Lars Fruergaard Jørgensen was summoned to testify to the US Senate Health, Education, Labor and Pensions Committee at a hearing in Washington DC. During the hearing Senator Bernie Sanders told the Novo Nordisk CEO, "Stop Ripping Us Off."
Sponsorships and pitchpeople
Novo Nordisk has sponsored athletes with diabetes, such as Charlie Kimball in auto racing and Team Novo Nordisk in road cycling.
Since the 2010s, Anthony Anderson (star of Black-ish) has served as a pitchman for Novo Nordisk, featuring in the company's television advertisements aired in the US.
See also
Captain Novolin
NNIT (formerly Novo Nordisk IT)
Novo Nordisk Foundation
Novo Nordisk Foundation Center for Protein Research
Repaglinide
Team Novo Nordisk
References
External links
Novo Nordisk Inc
Novo Nordisk Pharmatech A/S
Novo Nordisk
1923 establishments in Denmark
Biotechnology companies of Denmark
Companies based in Gladsaxe Municipality
Companies listed on Nasdaq Copenhagen
Companies listed on the New York Stock Exchange
Companies in the OMX Nordic 40
Companies in the S&P Europe 350 Dividend Aristocrats
Danish brands
Danish companies established in 1923
Health care companies of Denmark
Life science companies based in Copenhagen
Life sciences industry
Pharmaceutical companies established in 1923
Pharmaceutical companies of Denmark
Companies in the OMX Copenhagen 25
https://en.wikipedia.org/wiki/Base%20pair
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes.
Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code.
The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3.2 billion base pairs long and to contain 20,000–25,000 distinct protein-coding genes. A kilobase (kb) is a unit of measurement in molecular biology equal to 1000 base pairs of DNA or RNA. The total number of DNA base pairs on Earth is estimated at 5.0 × 10³⁷, with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
Hydrogen bonding and stability
Top, a G.C base pair with three hydrogen bonds. Bottom, an A.T base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the bases are shown as dashed lines. The wiggly lines stand for the connection to the pentose sugar and point in the direction of the minor groove.
Hydrogen bonding is the chemical interaction that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content. Crucially, however, stacking interactions are primarily responsible for stabilising the double-helical structure; Watson-Crick base pairing's contribution to global structural stability is minimal, but its role in the specificity underlying complementarity is, by contrast, of maximal importance as this underlies the template-dependent processes of the central dogma (e.g. DNA replication).
The bigger nucleobases, adenine and guanine, are members of a class of double-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of single-ringed chemical structures called pyrimidines. Purines are complementary only with pyrimidines: pyrimidine–pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine–purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. Purine–pyrimidine base-pairing of AT or GC or UA (in RNA) results in proper duplex structure. The only other purine–pyrimidine pairings would be AC and GT and UG (in RNA); these pairings are mismatches because the patterns of hydrogen donors and acceptors do not correspond. The GU pairing, with two hydrogen bonds, does occur fairly often in RNA (see wobble base pair).
Paired DNA and RNA molecules are comparatively stable at room temperature, but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is, therefore, unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. On the converse, regions of a genome that need to separate frequently — for example, the promoter regions for often-transcribed genes — are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions.
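As a worked illustration of the last point, the following minimal Python sketch computes two rule-of-thumb primer checks: the GC fraction and the Wallace-rule melting-temperature estimate (Tm ≈ 2(A+T) + 4(G+C) °C, a rough approximation valid only for short oligonucleotides). The example primer is hypothetical:

    def gc_content(seq):
        # Fraction of G and C bases in a DNA sequence.
        seq = seq.upper()
        return (seq.count('G') + seq.count('C')) / len(seq)

    def wallace_tm(primer):
        # Rule-of-thumb melting temperature for primers shorter than ~14 nt.
        p = primer.upper()
        at = p.count('A') + p.count('T')
        gc = p.count('G') + p.count('C')
        return 2 * at + 4 * gc

    primer = "ATGCGCGCTAAT"  # hypothetical 12-nt primer
    print(f"GC content: {gc_content(primer):.0%}")   # 50%
    print(f"Estimated Tm: {wallace_tm(primer)} °C")  # 36 °C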
Examples
The following DNA sequences illustrate base-paired double-stranded patterns. By convention, the top strand is written from the 5′-end to the 3′-end; thus, the bottom strand is written 3′ to 5′.
A base-paired DNA sequence:
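For instance (an illustrative duplex, since the original example is not reproduced here; each top-strand base pairs with its Watson–Crick complement below):

    5′-ATCGATTGAGCTCTAGCG-3′
    3′-TAGCTAACTCGAGATCGC-5′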
The corresponding RNA sequence, in which uracil is substituted for thymine in the RNA strand:
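Again illustrative, replacing the top strand with its RNA equivalent (T → U):

    5′-AUCGAUUGAGCUCUAGCG-3′
    3′-TAGCTAACTCGAGATCGC-5′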
Base analogs and intercalators
Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. This is due to their isosteric chemistry. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form.
Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine.
Mismatch repair
Mismatched base pairs can be generated by errors of DNA replication and as intermediates during homologous recombination. The process of mismatch repair ordinarily must recognize and correctly repair a small number of base mispairs within a long sequence of normal DNA base pairs. To repair mismatches formed during DNA replication, several distinctive repair processes have evolved to distinguish between the template strand and the newly formed strand so that only the newly inserted incorrect nucleotide is removed (in order to avoid generating a mutation). The proteins employed in mismatch repair during DNA replication, and the clinical significance of defects in this process are described in the article DNA mismatch repair. The process of mispair correction during recombination is described in the article gene conversion.
Length measurements
The following abbreviations are commonly used to describe the length of a D/RNA molecule:
bp = base pair—one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, and to roughly 618 or 643 daltons for DNA and RNA respectively.
kb (= kbp) = kilo–base-pair = 1,000 bp
Mb (= Mbp) = mega–base-pair = 1,000,000 bp
Gb (= Gbp) = giga–base-pair = 1,000,000,000 bp
For single-stranded DNA/RNA, units of nucleotides are used—abbreviated nt (or knt, Mnt, Gnt)—as they are not paired.
To distinguish between units of computer storage and bases, kbp, Mbp, Gbp, etc. may be used for base pairs.
The centimorgan is also often used to imply distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, the centimorgan is about 1 million base pairs.
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. DNA sequences have been described which use newly created nucleobases to form a third base pair, in addition to the two base pairs found in nature, A-T (adenine – thymine) and G-C (guanine – cytosine). A few research groups have been searching for a third base pair for DNA, including teams led by Steven A. Benner, Philippe Marliere, Floyd E. Romesberg and Ichiro Hirao. Some new base pairs based on alternative hydrogen bonding, hydrophobic interactions and metal coordination have been reported.
In 1989 Steven Benner (then working at the Swiss Federal Institute of Technology in Zurich) and his team incorporated modified forms of cytosine and guanine into DNA molecules in vitro. The nucleotides, which encoded RNA and proteins, were successfully replicated in vitro. Since then, Benner's team has been trying to engineer cells that can make foreign bases from scratch, obviating the need for a feedstock.
In 2002, Ichiro Hirao's group in Japan developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were found to form a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.
In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides were named d5SICS and dNaM. More technically, these artificial nucleotides, bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. His team designed a variety of in vitro or "test tube" templates containing the unnatural base pair and confirmed that it was efficiently replicated with high fidelity in virtually all sequence contexts using the modern standard in vitro techniques, namely PCR amplification of DNA and PCR-based applications. Their results show that for PCR and PCR-based applications, the d5SICS–dNaM unnatural base pair is functionally equivalent to a natural base pair, and when combined with the other two natural base pairs used by all organisms, A–T and G–C, they provide a fully functional and expanded six-letter "genetic alphabet".
In 2014 the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. The transfection did not hamper the growth of the E. coli cells and showed no sign of losing its unnatural base pairs to its natural DNA repair mechanisms. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Romesberg said he and his colleagues created 300 variants to refine the design of nucleotides that would be stable enough and would be replicated as easily as the natural ones when the cells divide. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate a plasmid containing d5SICS–dNaM. Other researchers were surprised that the bacteria replicated these human-made DNA subunits.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Experts said the synthetic DNA incorporating the unnatural base pair raises the possibility of life forms based on a different DNA code.
Non-canonical base pairing
In addition to the canonical pairing, some conditions can also favour base-pairing with alternative base orientation, and number and geometry of hydrogen bonds. These pairings are accompanied by alterations to the local backbone shape.
The most common of these is the wobble base pairing that occurs between tRNAs and mRNAs at the third base position of many codons during translation, and during the charging of tRNAs by some tRNA synthetases. Wobble pairs have also been observed in the secondary structures of some RNA sequences.
Additionally, Hoogsteen base pairing (typically written as A•U/T and G•C) can exist in some DNA sequences (e.g. CA and TA dinucleotides) in dynamic equilibrium with standard Watson–Crick pairing. They have also been observed in some protein–DNA complexes.
In addition to these alternative base pairings, a wide range of base-base hydrogen bonding is observed in RNA secondary and tertiary structure. These bonds are often necessary for the precise, complex shape of an RNA, as well as its binding to interaction partners.
See also
List of Y-DNA single-nucleotide polymorphisms
Non-canonical base pairing
Chargaff's rules
References
Further reading
(See esp. ch. 6 and 9)
External links
DAN—webserver version of the EMBOSS tool for calculating melting temperatures
Nucleobases
Molecular genetics
Nucleic acids
https://en.wikipedia.org/wiki/Sunset
Sunset (or sundown) is the disappearance of the Sun at the end of the Sun path, below the horizon of the Earth (or any other astronomical object in the Solar System) due to its rotation. As viewed from everywhere on Earth, it is a phenomenon that happens approximately once every 24 hours, except in areas close to the poles. The equinox Sun sets due west at the moment of both the spring and autumn equinoxes. As viewed from the Northern Hemisphere, the Sun sets to the northwest (or not at all) in the spring and summer, and to the southwest in the autumn and winter; these seasons are reversed for the Southern Hemisphere.
The time of actual sunset is defined in astronomy as the moment when the upper limb of the Sun disappears below the horizon. Near the horizon, atmospheric refraction causes sunlight rays to be distorted to such an extent that geometrically the solar disk is already about one diameter below the horizon when a sunset is observed.
Sunset is distinct from twilight, which is divided into three stages. The first one is civil twilight, which begins once the Sun has disappeared below the horizon, and continues until it descends to 6 degrees below the horizon. The early to intermediate stages of twilight coincide with predusk. The second phase is nautical twilight, between 6 and 12 degrees below the horizon. The third phase is astronomical twilight, which is the period when the Sun is between 12 and 18 degrees below the horizon. Dusk is at the very end of astronomical twilight, and is the darkest moment of twilight just before night. Finally, night occurs when the Sun reaches 18 degrees below the horizon and no longer illuminates the sky.
Locations further north than the Arctic Circle and further south than the Antarctic Circle experience no full sunset or sunrise on at least one day of the year, when the polar day or the polar night persists continuously for 24 hours. At latitudes within half a degree of either pole, the sun cannot both rise and set on the same date on any day of the year, since the sun's angular elevation between solar noon and midnight changes by less than one degree.
Occurrence
The time of sunset varies throughout the year and is determined by the viewer's position on Earth, specified by latitude and longitude, altitude, and time zone. Small daily changes and noticeable semi-annual changes in the timing of sunsets are driven by the axial tilt of the Earth, daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. During winter and spring, the days get longer and sunsets occur later every day until the day of the latest sunset, which occurs after the summer solstice. In the Northern Hemisphere, the latest sunset occurs late in June or in early July, but not on the summer solstice of June 21. This date depends on the viewer's latitude (connected with the Earth's slower movement around the aphelion around July 4). Likewise, the earliest sunset does not occur on the winter solstice, but rather about two weeks earlier, again depending on the viewer's latitude. In the Northern Hemisphere, it occurs in early December or late November (influenced by the Earth's faster movement near its perihelion, which occurs around January 3).
Likewise, the same phenomenon exists in the Southern Hemisphere, but with the respective dates reversed, with the earliest sunsets occurring some time before June 21 in winter, and the latest sunsets occurring some time after December 21 in summer, again depending on one's southern latitude. For a few weeks surrounding both solstices, both sunrise and sunset get slightly later each day. Even on the equator, sunrise and sunset shift several minutes back and forth through the year, along with solar noon. These effects are plotted by an analemma.
Neglecting atmospheric refraction and the Sun's non-zero size, whenever and wherever sunset occurs, it is always in the northwest quadrant from the March equinox to the September equinox, and in the southwest quadrant from the September equinox to the March equinox. Sunsets occur almost exactly due west on the equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunset on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma.
As sunrise and sunset are calculated from the leading and trailing edges of the Sun, respectively, and not the center, the duration of a daytime is slightly longer than nighttime (by about 10 minutes, as seen from temperate latitudes). Further, because the light from the Sun is refracted as it passes through the Earth's atmosphere, the Sun is still visible after it is geometrically below the horizon. Refraction also affects the apparent shape of the Sun when it is very close to the horizon. It makes things appear higher in the sky than they really are. Light from the bottom edge of the Sun's disk is refracted more than light from the top, since refraction increases as the angle of elevation decreases. This raises the apparent position of the bottom edge more than the top, reducing the apparent height of the solar disk. Its width is unaltered, so the disk appears wider than it is high. (In reality, the Sun is almost exactly spherical.) The Sun also appears larger on the horizon, an optical illusion, similar to the moon illusion.
Locations within the Arctic and Antarctic Circles experience periods where the Sun does not rise or set for 24 hours or more, known as polar day and polar night. These phenomena occur due to Earth’s axial tilt, causing continuous sunlight or darkness at certain times of the year.
Location on the horizon
Approximate locations of sunset on the horizon (azimuth), as described above, are tabulated in the references.
The azimuth of sunset can be calculated with a solar geometry routine as follows (a minimal code sketch follows the list):
For a given latitude and a given date, calculate the declination of the Sun using longitude and solar noon time as inputs to the routine;
Calculate the sunset hour angle using the sunset equation;
Calculate the sunset time, which is the solar noon time plus the sunset hour angle in degree divided by 15;
Use the sunset time as input to the solar geometry routine to get the solar azimuth angle at sunset.
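The following is a minimal Python sketch of the four steps above. It uses Cooper's approximation for the solar declination (an assumption; the routine behind the original plot may be more exact) and neglects atmospheric refraction, consistent with the geometric treatment above:

    import math

    def solar_declination(day_of_year):
        # Cooper's approximation, in degrees.
        return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

    def sunset_hour_angle(lat_deg, decl_deg):
        # Sunset equation: cos(omega) = -tan(latitude) * tan(declination).
        x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
        if x <= -1.0:
            return 180.0  # polar day: the Sun does not set
        if x >= 1.0:
            return 0.0    # polar night: the Sun does not rise
        return math.degrees(math.acos(x))

    def sunset_azimuth(lat_deg, decl_deg):
        # At zero altitude, cos(azimuth from north) = sin(decl) / cos(lat);
        # at sunset the Sun is on the western half of the sky.
        a = math.sin(math.radians(decl_deg)) / math.cos(math.radians(lat_deg))
        a = max(-1.0, min(1.0, a))
        return 360.0 - math.degrees(math.acos(a))

    # Example: latitude 52° N near the June solstice (day 172).
    decl = solar_declination(172)
    omega = sunset_hour_angle(52.0, decl)
    print("sunset (solar time):", 12.0 + omega / 15.0)               # step 3
    print("sunset azimuth:", round(sunset_azimuth(52.0, decl), 1))   # ~310°, northwest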
An interesting feature of the resulting azimuth plot is an apparent hemispheric symmetry in regions where daily sunrise and sunset actually occur. This symmetry becomes clear when the hemispheric relation in the sunrise equation is applied to the x- and y-components of the solar vector presented in the reference. Solar geometry routines that model solar azimuth angles at sunset allow the calculation to be carried out precisely from latitude, date, and time.
Colors
As a ray of white sunlight travels through the atmosphere to an observer, some of the colors are scattered out of the beam by air molecules and airborne particles, changing the final color of the beam the viewer sees.
Because the shorter wavelength components, such as blue and green, scatter more strongly, these colors are preferentially removed from the beam. At sunrise and sunset, when the path through the atmosphere is longer, the blue and green components are removed almost completely, leaving the longer wavelength orange and red hues we see at those times. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. The removal of the shorter wavelengths of light is due to Rayleigh scattering by air molecules and particles much smaller than the wavelength of visible light (less than 50 nm in diameter). The scattering by cloud droplets and other particles with diameters comparable to or larger than the sunlight's wavelengths (> 600 nm) is due to Mie scattering and is not strongly wavelength-dependent. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light).
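For instance, since Rayleigh scattering strength scales roughly as 1/λ⁴, blue light at 450 nm is scattered about (650/450)⁴ ≈ 4.4 times more strongly than red light at 650 nm, which is why the direct beam reddens as its path through the atmosphere lengthens.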
Sunset colors are typically more brilliant than sunrise colors, because the evening air contains more particles than morning air. Sometimes just before sunrise or after sunset a green flash can be seen.
Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, while volcanic ejecta that is instead lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets) can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric clouds of sulfuric acid to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high-altitude clouds serve to reflect strongly reddened sunlight still striking the stratosphere after sunset, down to the surface.
Some of the most varied colors at sunset can be found in the opposite or eastern sky after the Sun has set during twilight. Depending on weather conditions and the types of clouds present, these colors have a wide spectrum, and can produce unusual results.
Names of compass points
In some languages, points of the compass bear names etymologically derived from words for sunrise and sunset. The English words "orient" and "occident", meaning "east" and "west", respectively, are descended from Latin words meaning "sunrise" and "sunset". The word "levant", related e.g. to French "(se) lever" meaning "lift" or "rise" (and also to English "elevate"), is also used to describe the east. In Polish, the word for east wschód (vskhud), is derived from the morpheme "ws" – meaning "up", and "chód" – signifying "move" (from the verb chodzić – meaning "walk, move"), due to the act of the Sun coming up from behind the horizon. The Polish word for west, zachód (zakhud), is similar but with the word "za" at the start, meaning "behind", from the act of the Sun going behind the horizon. In Russian, the word for west, запад (zapad), is derived from the words за – meaning "behind", and пад – signifying "fall" (from the verb падать – padat'), due to the act of the Sun falling behind the horizon. In Hebrew, the word for east is 'מזרח', which derives from the word for rising, and the word for west is 'מערב', which derives from the word for setting.
Historical view
The 16th-century astronomer Nicolaus Copernicus was the first to present to the world a detailed and eventually widely accepted mathematical model supporting the premise that the Earth is moving and the Sun actually stays still, despite the impression from our point of view of a moving Sun.
Planets
Sunsets on other planets appear different because of differences in the distance of the planet from the Sun and non-existent or differing atmospheric compositions.
Mars
On Mars, the setting Sun appears about two-thirds the size it does from Earth, due to the greater distance between Mars and the Sun. The colors are typically hues of blue, but some Martian sunsets last significantly longer and appear far redder than is typical on Earth.
The colors of the Martian sunset differ from those on Earth. Mars has a thin atmosphere, lacking oxygen and nitrogen, so the light scattering is not dominated by a Rayleigh Scattering process. Instead, the air is full of red dust, blown into the atmosphere by high winds, so its sky color is mainly determined by a Mie Scattering process, resulting in more blue hues than an Earth sunset. One study also reported that Martian dust high in the atmosphere can reflect sunlight up to two hours after the Sun has set, casting a diffuse glow across the surface of Mars.
See also
Dawn
Diffuse sky radiation
Earth's shadow, visible at sunset
Golden hour (photography)
Heliacal setting
Sundown town
References
External links
Full physical explanation in simple terms
The Colors of Twilight and Sunset
The Science of Sunsets
The Physics of Sunsets - More detailed explanation including the role of clouds
Geolocation service to calculate the time of sunrise and sunset
Earth phenomena
Parts of a day
Solar phenomena
Daily events
Evening
https://en.wikipedia.org/wiki/Interlocking%20interval%20topology
In mathematics, and especially general topology, the interlocking interval topology is an example of a topology on the set S = R+ ∖ Z+, i.e. the set of all positive real numbers that are not positive whole numbers.
Construction
The open sets in this topology are taken to be the whole set S, the empty set ∅, and the sets generated by the sets Xn = (0, 1/n) ∪ (n, n + 1) for n = 1, 2, 3, ...
The sets generated by Xn will be formed by all possible unions of finite intersections of the Xn.
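For instance, under the definition above, X1 = (0, 1) ∪ (1, 2) and X2 = (0, 1/2) ∪ (2, 3); the finite intersection X1 ∩ X2 = (0, 1/2) is therefore also open, as is any union of such intersections.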
See also
List of topologies
References
General topology
Topological spaces
https://en.wikipedia.org/wiki/Perisinusoidal%20space
The perisinusoidal space (or space of Disse) is a space between a hepatocyte and a sinusoid in the liver. It contains blood plasma. Microvilli of hepatocytes extend into this space, allowing proteins and other plasma components from the sinusoids to be absorbed by the hepatocytes. Fenestration and discontinuity of the sinusoid endothelium facilitates this transport. The perisinusoidal space also contains hepatic stellate cells (also known as Ito cells or lipocytes), which store vitamin A in characteristic lipid droplets.
This space may be obliterated in liver disease, leading to decreased uptake by hepatocytes of nutrients and wastes such as bilirubin.
The Space of Disse is named for the German anatomist Joseph Disse (1852–1912).
Pathophysiology
Fibrosis
Liver injury from a number of causes can activate the hepatic stellate cells into transdifferentiated and prolific myofibroblasts. The myofibroblasts synthesize and secrete components of the extracellular matrix including collagen into the perisinusoidal space. This in turn promotes the development of fibrosis, and continuing fibrosis is thought to be responsible for the development of cirrhosis, and liver cancer.
References
External links
- "Ultrastructure of the Cell: hepatocytes and sinusoids, sinusoid and space of Disse"
- "Liver, Gall Bladder, and Pancreas: liver; sinusoids and Kupffer cells"
- "Mammal, liver (EM, Low)"
Liver anatomy
Histology
https://en.wikipedia.org/wiki/ReDoS
A regular expression denial of service (ReDoS) is an algorithmic complexity attack that produces a denial-of-service by providing a regular expression and/or an input that takes a long time to evaluate. The attack exploits the fact that many regular expression implementations have super-linear worst-case complexity; on certain regex-input pairs, the time taken can grow polynomially or exponentially in relation to the input size. An attacker can thus cause a program to spend substantial time by providing a specially crafted regular expression and/or input. The program will then slow down or become unresponsive.
Description
Regular expression ("regex") matching can be done by building a finite-state automaton. Regex can be easily converted to nondeterministic automata (NFAs), in which for each state and input symbol, there may be several possible next states. After building the automaton, several possibilities exist:
the engine may convert it to a deterministic finite-state automaton (DFA) and run the input through the result;
the engine may try one by one all the possible paths until a match is found or all the paths are tried and fail ("backtracking").
the engine may consider all possible paths through the nondeterministic automaton in parallel;
the engine may convert the nondeterministic automaton to a DFA lazily (i.e., on the fly, during the match).
Of the above algorithms, the first two are problematic. The first is problematic because a deterministic automaton could have up to 2^m states, where m is the number of states in the nondeterministic automaton; thus, the conversion from NFA to DFA may take exponential time. The second is problematic because a nondeterministic automaton could have an exponential number of paths of length n, so that walking through an input of length n will also take exponential time.
The last two algorithms, however, do not exhibit pathological behavior.
Note that for non-pathological regular expressions, the problematic algorithms are usually fast, and in practice, one can expect them to "compile" a regex in O(m) time and match it in O(n) time; instead, simulation of an NFA and lazy computation of the DFA have O(m2n) worst-case complexity. Regex denial of service occurs when these expectations are applied to a regex provided by the user, and malicious regular expressions provided by the user trigger the worst-case complexity of the regex matcher.
While regex algorithms can be written in an efficient way, most regex engines in existence extend the regex languages with additional constructs that cannot always be solved efficiently. Such extended patterns essentially force the implementation of regex in most programming languages to use backtracking.
Examples
Exponential backtracking
The most severe type of problem happens with backtracking regular expression matches, where some patterns have a runtime that is exponential in the length of the input string. For strings of n characters, the runtime is O(2^n). This happens when a regular expression has three properties:
the regular expression applies repetition (+, *) to a subexpression;
the subexpression can match the same input in multiple ways, or the subexpression can match an input string which is a prefix of a longer possible match;
and after the repeated subexpression, there is an expression that matches something which the subexpression does not match.
The second condition is best explained with two examples:
in (a|a)+$, repetition is applied to the subexpression a|a, which can match a in two ways on each side of the alternation.
in (a+)*$, repetition is applied to the subexpression a+, which can match a or aa, etc.
In both of these examples we used $ to match the end of the string, satisfying the third condition, but it is also possible to use another character for this. For example (a|aa)*c has the same problematic structure.
All three of the above regular expressions will exhibit exponential runtime when applied to strings of the form aⁿ! (that is, n repetitions of "a" followed by "!"). For example, if you try to match them against aaaaaaaaaaaaaaaaaaaaaaaa! on a backtracking expression engine, it will take a significantly long time to complete, and the runtime will approximately double for each extra a before the !.
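A minimal Python sketch of this growth on CPython's backtracking re engine (the range of n is kept small so the demonstration finishes in reasonable time; expect the measured time to roughly double with each extra a):

    import re
    import time

    # Repetition over an ambiguous subexpression, anchored so that the
    # overall match must fail on "aaa...a!" after exhaustive backtracking.
    pattern = re.compile(r'(a|a)+$')

    for n in range(15, 21):
        s = 'a' * n + '!'
        start = time.perf_counter()
        pattern.match(s)  # returns None, but only after trying ~2^n paths
        print(n, f'{time.perf_counter() - start:.3f}s')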
It is also possible to have backtracking which is polynomial time O(n²), instead of exponential. This can also cause problems for long enough inputs, though less attention has been paid to this problem as malicious input must be much longer to have a significant effect. An example of such a pattern is "a*b?a*x", when the input is an arbitrarily long sequence of "a"s.
Vulnerable regexes in online repositories
So-called "evil" or vulnerable regexes have been found in online regular expression repositories. Note that it is enough to find a vulnerable subexpression in order to attack the full regex:
RegExLib, id=1757 (email validation) - see the part ^([a-zA-Z0-9])(@){1}[a-z0-9]+[.]{1}(([a-z]{2,3})|([a-z]{2,3}[.]{1}[a-z]{2,3}))$
OWASP Validation Regex Repository, Java Classname - see the part ^[A-Z]([a-z])+$
These two examples are also susceptible to the input aaaaaaaaaaaaaaaaaaaaaaaa!.
Attacks
If the regex itself is affected by user input, such as a web service permitting clients to provide a search pattern, then an attacker can inject a malicious regex to consume the server's resources. Therefore, in most cases, regular expression denial of service can be avoided by removing the possibility for the user to execute arbitrary patterns on the server. In this case, web applications and databases are the main vulnerable applications. Alternatively, a malicious page could hang the user's web browser or cause it to use arbitrary amounts of memory.
However, if a vulnerable regex exists on the server-side already, then an attacker may instead be able to provide an input that triggers its worst-case behavior. In this case, e-mail scanners and intrusion detection systems could also be vulnerable.
In the case of a web application, the programmer may use the same regular expression to validate input on both the client and the server side of the system. An attacker could inspect the client code, looking for evil regular expressions, and send crafted input directly to the web server in order to hang it.
Mitigation
ReDoS can be mitigated without changes to the regular expression engine, simply by setting a time limit for the execution of regular expressions when untrusted input is involved.
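For example, .NET's Regex accepts a match timeout, and in Python a match against untrusted input can be isolated in a worker process and abandoned if it overruns. A minimal sketch (the helper names are illustrative):

    import re
    from multiprocessing import Process, Queue

    def _worker(pattern, text, out):
        out.put(bool(re.match(pattern, text)))

    def match_with_timeout(pattern, text, seconds=1.0):
        # Run the match in a separate process so a runaway
        # (catastrophically backtracking) match can be killed.
        out = Queue()
        proc = Process(target=_worker, args=(pattern, text, out))
        proc.start()
        proc.join(seconds)
        if proc.is_alive():
            proc.terminate()  # abandon the runaway match
            proc.join()
            return None       # no verdict within the time budget
        return out.get()

    if __name__ == '__main__':  # guard needed on spawn-based platforms
        print(match_with_timeout(r'(a|a)+$', 'a' * 40 + '!', 0.5))  # None
        print(match_with_timeout(r'(a|a)+$', 'a' * 5, 0.5))         # True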
ReDoS can be avoided entirely by using a non-vulnerable regular expression implementation. After CloudFlare's web application firewall (WAF) was brought down by a PCRE ReDoS in 2019, the company rewrote its WAF to use the non-backtracking Rust regex library, using an algorithm similar to RE2.
Vulnerable regular expressions can be detected programmatically by a linter. Methods range from pure static analysis to fuzzing. In most cases, the problematic regular expressions can be rewritten as "non-evil" patterns. For example, (.*a)+ can be rewritten to ([^a]*a)+. Possessive matching and atomic grouping, which disable backtracking for parts of the expression, can also be used to "pacify" vulnerable parts.
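A small timing comparison of the vulnerable pattern and its rewrite on a backtracking engine (illustrative; absolute times vary by machine, and the first line may take noticeable time):

    import re
    import time

    def time_match(pattern, text):
        start = time.perf_counter()
        re.match(pattern, text)
        return time.perf_counter() - start

    failing = 'a' * 18 + 'b'  # can never match either pattern
    print('evil  (.*a)+$  :', f"{time_match(r'(.*a)+$', failing):.4f} s")    # slow
    print('safe ([^a]*a)+$:', f"{time_match(r'([^a]*a)+$', failing):.6f} s") # near-instant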
See also
Denial-of-service attack
Cyberwarfare
Low Orbit Ion Cannon
High Orbit Ion Cannon
References
External links
Examples of ReDoS in open source applications:
ReDoS in DataVault (CVE-2009-3277)
ReDoS in EntLib (CVE-2009-3275)
ReDoS in NASD CORE.NET Terelik (CVE-2009-3276)
Some benchmarks for ReDoS
Achim Hoffman (2010). "ReDoS - benchmark for regular expression DoS in JavaScript". Retrieved 2010-04-19.
Richard M. Smith (2010). "Regular expression denial of service (ReDoS) attack test results". Retrieved 2010-04-19.
Algorithmic complexity attacks
Denial-of-service attacks
Pattern matching
Regular expressions
https://en.wikipedia.org/wiki/NGC%201058
NGC 1058 is a Seyfert Type 2 galaxy in the NGC 1023 Group, located in the Perseus constellation. It is approximately 27.4 million light years from Earth and has an apparent magnitude of 11.82. It is receding from Earth and from the Milky Way.
Supernovae
Three supernovae have been observed in NGC 1058:
SN 1961V (type II-P or possibly type LBV, mag. 12.2) was discovered by Paul Wild on 11 July 1961.
SN 1969L (type II, mag. 12.8) was discovered by Leonida Rosino on 2 December 1969.
SN 2007gr (type Ib/c, mag. 13.8) was discovered by the Lick Observatory Supernova Search (LOSS) on 15 August 2007.
References
External links
Unbarred spiral galaxies
1058
Perseus (constellation)
02193
10314
NGC 1023 Group
Seyfert galaxies
https://en.wikipedia.org/wiki/Dental%20porcelain
Dental porcelain (also known as dental ceramic) is a dental material used by dental technicians to create biocompatible lifelike dental restorations, such as crowns, bridges, and veneers. Evidence suggests they are an effective material as they are biocompatible, aesthetic, insoluble and have a hardness of 7 on the Mohs scale. For certain dental prostheses, such as three-unit molar bridges, porcelain fused to metal or, within the all-porcelain group, zirconia-based restorations are recommended.
The word "ceramic" is derived from the Greek word keramos, meaning "potter's clay". It came from the ancient art of fabricating pottery where mostly clay was fired to form a hard, brittle object; a more modern definition is a material that contains metallic and non-metallic elements (usually oxygen). These materials can be defined by their inherent properties including their hard, stiff, and brittle nature due to the structure of their inter-atomic bonding, which is both ionic and covalent. In contrast, metals are non-brittle (display elastic behavior), and ductile (display plastic behaviour) due to the nature of their inter-atomic metallic bond. These bonds are defined by a cloud of shared electrons with the ability to move easily when energy is applied. Ceramics can vary in opacity from very translucent to very opaque. In general, the more glassy the microstructure (i.e. noncrystalline) the more translucent it will appear, and the more crystalline, the more opaque.
Composition
Ceramic used in dental application differs in composition from conventional ceramic to achieve optimum aesthetic components such as translucency.
As an example, the composition of dental feldspathic porcelain is as follows:
Kaolin 3-5%
Quartz (silica) 12-25%
Feldspar 70-85%
Metallic colourants 1%
Glass up to 15%
Classification
Ceramics can be classified based on the following:
Classification by Microstructure
At the microstructural level, ceramics can be defined by the nature of their composition of amorphous-to-crystalline ratio. There can be an infinite variability of the microstructures of materials, but they can be broken down into four basic compositional categories, with a few subgroups:
Composition category 1 – glass-based systems (mainly silica), example is the feldspathic porcelain.
Composition category 2 – glass-based systems (mainly silica) with fillers, usually crystalline (typically leucite or, more recently, lithium disilicate)
Composition category 3 – crystalline-based systems with glass fillers (mainly alumina)
Composition category 4 – polycrystalline solids (alumina and zirconia).
Dental ceramic is generally regarded as biologically inert. However, other toxicities may exist from depleted uranium as well as some of the other accessory materials; in addition, the restoration may increase wear on opposing teeth.
Classification by Processing Technique
Powder/liquid, glass-based systems
Machinable or pressable blocks of glass-based systems
CAD/CAM or slurry, die-processed, mostly crystalline systems
Classification of crystalline ceramics
Types of dental ceramics
The range of dental ceramics determined by their respective firing temperatures are:
Ultra-low
Fired below 850 °C - mainly used for shoulder ceramics (which aim to combat the problem of shrinkage, specifically at the margins of the preparation, when the early sintered ceramic state is fired to produce the final restoration), to correct minor defects and to add colour/shading to restorations
Low fusing
Fired between 850 and 950 °C - to prevent the occurrence of distortion, this type of ceramic should not be subjected to multiple firings
Higher fusing
This type is used mainly for denture teeth
Laboratory procedure
The dentist will usually specify a shade or combination of shades for different parts of the restoration, which in turn corresponds to a set of samples containing the porcelain powder. There are two types of porcelain restorations:
Porcelain fused to metal
Complete porcelain
Ceramic restorations can be built on a refractory die, which is a reproduction of a prepared tooth made of a strong material with the ability to withstand high temperatures, or it can be constructed on a metal coping or core.
For ceramic fused to metal restorations, the black color of metal is first masked with an opaque layer giving it a shade of white before consecutive layers are built up. The powder corresponding to the desired shade of dentine base is mixed with water before it is fired. Further layers are built up to mimic the natural translucency of the enamel of the tooth. The porcelain is fused to a semi-precious metal or precious metal, such as gold, for extra strength.
Systems which use an aluminium oxide, zirconium oxide or zirconia core instead of metal produce complete porcelain restorations.
Firing
Once the mass has been built up, it is heated to fuse the ceramic particles, which in turn forms the completed restoration; this process is referred to as ‘firing’.
The first firing forces water out and allows the particles to coalesce. During this initial process, a large amount of shrinkage occurs until the mass reaches an almost void-free state; to overcome this the mass is built-up to a size larger than the final restoration will be.
The mass is then left to cool slowly to prevent cracking and reduced strength of the final restoration.
Adding more layers to build up the restoration to the desired shape and/or size requires the ceramic to undergo further rounds of firing.
Staining
Ceramic can also be stained to show tooth morphology such as occlusal fissures and hypoplastic spots. These stains can be incorporated within the ceramic or applied onto the surface.
Glazing
Glazing is required to produce a smooth surface, and it is the last stage of sealing the surface, as it fills porous areas and prevents wear on opposing teeth. Glazing can be achieved by re-firing the restoration, which fuses the outer layers of the ceramic, or by using glazes with lower fusing temperatures; these are applied to the outer surface of the restoration in a thin layer. Any adjustments are then made with polishing rubbers and fine diamonds.
Use of CAD-CAM
Recent developments in CAD/CAM dentistry use special partially sintered ceramic (zirconia), glass-bonded ceramic or glass-ceramic (lithium disilicate) formed into machinable blocks, which are fired again after machining.
By utilising in-office CAD/CAM technology, clinicians are able to design, fabricate and place all-ceramic inlays, onlays, crowns and veneers in a single patient visit. Ceramic restorations produced by this method have demonstrated excellent fit, strength and longevity. Two basic techniques can be used for CAD/CAM restorations:
Chairside single-visit technique
Integrated chairside–laboratory CAD/CAM procedure
Ceramic restorations
Ceramic restorations are indicated for most dental applications including:
Veneers
Inlays
Onlays
Crowns
Bridges
Implant supra- and sub-structures
Denture teeth
However, each system will have its own set of specific indications and contraindications which can be obtained from the manufacturer's guideline.
Contraindications for ceramic restorations
Ceramic restorations are contraindicated when a patient presents with the following:
Parafunction; individuals who suffer from bruxism or clenching
Short clinical crown
Immature teeth
Unfavourable occlusion
Supragingival preparations (when used alongside adhesive cements)
Other uses
Denture teeth
Poly(methyl methacrylate) (PMMA) is the material of choice for denture teeth; however, ceramic denture teeth have been, and still are, used for this purpose. The main benefit associated with the use of ceramic teeth is their superior wear resistance. There are, however, a number of disadvantages to using ceramic for denture teeth, including their inability to form chemical bonds with the PMMA denture base; rather, ceramic teeth are attached to the base via mechanical retention, which increases the chance of debonding over time. Additionally, they are more likely to fracture due to their brittle nature.
Endodontic posts
Ceramic can be used in the construction of non-metallic posts; however, it is a brittle material and as such may fracture within the root canal, or may cause fracture of the root owing to its rigidity and strength. Another disadvantage is that once placed, removal may not be possible.
References
Dental materials
Porcelain | Dental porcelain | Physics | 1,744 |
34,709,142 | https://en.wikipedia.org/wiki/Intze%20principle | The Intze Principle () is a name given to two engineering principles, both named after the hydraulic engineer, Otto Intze, (1843–1904). In the one case, the Intze Principle relates to a type of water tower; in the other, a type of dam.
Intze Principle for water towers
A water tower built in accordance with the Intze Principle has a brick shaft on which the water tank sits. The base of the tank is fixed with a ring anchor (Ringanker) made of iron or steel, so that only vertical, not horizontal, forces are transmitted to the tower. Due to the lack of horizontal forces the tower shaft does not need to be quite as solidly built.
This type of design was used in Germany between 1885 and 1905.
Intze Principle for dams
The method of dam construction invented by Otto Intze was used in Germany at the end of the 19th and beginning of the 20th centuries. A dam built on the Intze Principle has the following features:
it is a gravity dam with an almost triangular cross-section
the wall is made of rubble stone with a high proportion of mortar
it has a curved ground plan
it has facing brickwork (Vorsatzmauerwerk or Verblendung) on the upper part of the upstream side
it has an earth embankment against the lower part of the upstream side, the so-called Intze Wedge (Intze-Keil)
it has a cement-sealed upstream face, coated with a layer of bitumen or tar
it has internal vertical drainage using clay pipes behind the upstream face
The purpose of the Intze Wedge is to provide an additional seal in the area of the highest water pressure. During the 1920s, this type of construction was gradually superseded by concrete dams or arched dams which were cheaper to build.
See also
List of dams in Germany
References
External links
Otto Intze and water tower construction
Otto Intze and dam construction
Intze tanks at wassertuerme.gmxhome.de
Hydraulic engineering
Water towers
Dams | Intze principle | Physics,Engineering,Environmental_science | 408 |
298,408 | https://en.wikipedia.org/wiki/Brownian%20tree | In probability theory, the Brownian tree, or Aldous tree, or Continuum Random Tree (CRT) is a random real tree that can be defined from a Brownian excursion. The Brownian tree was defined and studied by David Aldous in three articles published in 1991 and 1993. This tree has since then been generalized.
This random tree has several equivalent definitions and constructions: using sub-trees generated by finitely many leaves, using a Brownian excursion, Poisson separating a straight line or as a limit of Galton-Watson trees.
Intuitively, the Brownian tree is a binary tree whose nodes (or branching points) are dense in the tree; which is to say that for any two distinct points of the tree, there will always exist a node between them. It is a fractal object which can be approximated with computers or by physical processes with dendritic structures.
Definitions
The following definitions are different characterisations of a Brownian tree; they are taken from Aldous's three articles. The notions of leaf, node, branch, and root are the intuitive notions on a tree (for details, see real trees).
Finite-dimensional laws
This definition gives the finite-dimensional laws of the subtrees generated by finitely many leaves.
Let us consider the space of all binary trees with $k$ leaves numbered from $1$ to $k$. These trees have $2k-1$ edges with lengths $(\ell_1, \ell_2, \dots, \ell_{2k-1}) \in \mathbb{R}_+^{2k-1}$. A tree is then defined by its shape $\tau$ (which is to say the order of the nodes) and the edge lengths. We define a probability law $\mathbb{P}$ of a random variable $(\mathcal{T}, (L_i)_{1 \le i \le 2k-1})$ on this space by:

$$\mathbb{P}\left(\mathcal{T} = \tau,\; L_i \in [\ell_i, \ell_i + d\ell_i]\ \forall i\right) = s\, e^{-s^2/2}\; d\ell_1 \cdots d\ell_{2k-1},$$

where $s = \sum_{i=1}^{2k-1} \ell_i$.
In other words, $\mathbb{P}$ depends not on the shape of the tree but rather on the total sum of all the edge lengths.
In other words, the Brownian tree is defined from the laws of all the finite sub-trees one can generate from it.
Continuous tree
The Brownian tree is a real tree defined from a Brownian excursion (see characterisation 4 in Real tree).
Let $e = (e(x),\ 0 \le x \le 1)$ be a Brownian excursion. Define a pseudometric $d$ on $[0,1]$ with

$$d(x,y) = e(x) + e(y) - 2 \min\{\, e(z) \,;\, z \in [x,y] \,\}$$

for any $x, y \in [0,1]$.

We then define an equivalence relation, noted $\sim_e$, on $[0,1]$ which relates all points $x, y$ such that $d(x,y) = 0$.

$d$ is then a distance on the quotient space $[0,1]\,/\sim_e$.

It is customary to consider the excursion $2e$ rather than $e$.
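As an illustration, here is a minimal numerical sketch of this construction (ours, not from Aldous's papers; the function names and grid discretisation are assumptions for the example). A Brownian excursion is approximated via the Vervaat transform of a Brownian bridge, and the pseudometric $d$ is evaluated on grid points:

```python
import numpy as np

def brownian_excursion(n, seed=None):
    """Approximate a Brownian excursion on a grid of n+1 points of [0, 1]
    via the Vervaat transform: a Brownian bridge, cyclically shifted so
    that its minimum moves to time 0, is a Brownian excursion."""
    rng = np.random.default_rng(seed)
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))])
    bridge = w - np.linspace(0.0, 1.0, n + 1) * w[-1]  # pin w(1) back to 0
    m = bridge.argmin()
    return np.concatenate([bridge[m:], bridge[1:m + 1]]) - bridge[m]

def tree_distance(e, i, j):
    """d(x, y) = e(x) + e(y) - 2 min{e(z) : z in [x, y]}, on grid indices."""
    i, j = min(i, j), max(i, j)
    return e[i] + e[j] - 2.0 * e[i:j + 1].min()

e = 2.0 * brownian_excursion(10_000, seed=0)  # the customary excursion 2e
print(tree_distance(e, 1234, 8765))
```

Grid points at (numerically) zero distance are identified, which is exactly the quotient defining the tree.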
Poisson line-breaking construction
This is also called stick-breaking construction.
Consider a non-homogeneous Poisson point process $N$ with intensity $r(t) = t$. In other words, for any $t > 0$, $N_t$ is a Poisson variable with parameter $t^2/2$. Let $C_1, C_2, \dots$ be the points of $N$. Then the lengths of the intervals $[C_i, C_{i+1}]$ are approximately exponential variables with decreasing means. We then make the following construction:

(initialisation) The first step is to pick a random point $u$ uniformly on the interval $[0, C_1]$. Then we glue the segment $\left]C_1, C_2\right]$ to $u$ (mathematically speaking, we define a new distance). We obtain a tree $T_2$ with a root (the point $0$), two leaves ($C_1$ and $C_2$), as well as one binary branching point (the point $u$).

(iteration) At step $k$, the segment $\left]C_k, C_{k+1}\right]$ is similarly glued to the tree $T_k$, on a uniformly random point of $T_k$.
This algorithm may be used to simulate numerically Brownian trees.
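A minimal sketch of such a simulation in Python follows (our own naming, offered as an assumption-laden illustration rather than a reference implementation). The gap between consecutive Poisson points is sampled by inverting its conditional survival function $\exp(-(x^2/2 + C_k x))$; since the tree built from $[0, C_k]$ has total length $C_k$, a uniform random point of the tree corresponds to a uniform point of $[0, C_k]$ along the half-line, so the whole construction is captured by the cut points and the attachment points:

```python
import numpy as np

def sample_brownian_tree(n_segments, seed=None):
    """Poisson line-breaking construction, kept as raw data: cut points
    C_1 < C_2 < ... of the inhomogeneous Poisson process with intensity
    r(t) = t, plus, for each segment ]C_k, C_{k+1}], the position (along
    the half-line) of the point of the current tree it is glued to."""
    rng = np.random.default_rng(seed)
    cuts = [0.0]
    for _ in range(n_segments):
        c = cuts[-1]
        u = 1.0 - rng.uniform()  # uniform in (0, 1], keeps log(u) finite
        # the gap x has survival function exp(-(x**2/2 + c*x)); invert it:
        gap = -c + np.sqrt(c * c - 2.0 * np.log(u))
        cuts.append(c + gap)
    # the tree built from [0, C_k] has total length C_k, so a uniform
    # point of the tree is a uniform point of [0, C_k] on the half-line
    attach = [rng.uniform(0.0, cuts[k]) for k in range(1, n_segments)]
    return np.array(cuts), np.array(attach)

cuts, attach = sample_brownian_tree(1000, seed=0)
print("total length of the approximating tree:", cuts[-1])
```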
Limit of Galton-Watson trees
Consider a Galton-Watson tree whose reproduction law has finite non-zero variance $\sigma^2$, conditioned to have $n$ nodes. Let $G_n$ be this tree, and let $\tfrac{1}{\sqrt{n}} G_n$ denote the tree with the edge lengths divided by $\sqrt{n}$; in other words, each edge has length $1/\sqrt{n}$. Then, in the sense made precise below,

$$\lim_{n \to \infty} \frac{1}{\sqrt{n}}\, G_n = \frac{2}{\sigma}\, \mathcal{T},$$

where $\mathcal{T}$ is the Brownian tree. The construction can be formalized by considering the Galton-Watson tree as a metric space or by using renormalized contour processes.
Here, the limit used is the convergence in distribution of stochastic processes in the Skorokhod space (if we consider the contour processes) or the convergence in distribution defined from the Hausdorff distance (if we consider the metric spaces).
References
Wiener process
Fractals | Brownian tree | Mathematics | 755 |
5,671,511 | https://en.wikipedia.org/wiki/MS%200735.6%2B7421 | MS 0735.6+7421 is a galaxy cluster located in the constellation Camelopardalis, approximately 2.6 billion light-years away. It is notable as the location of one of the largest central galactic black holes in the known universe, which has also apparently produced one of the most powerful active galactic nucleus eruptions discovered.
In February 2020, it was reported that another similar but much more energetic AGN outburst - the Ophiuchus Supercluster eruption in the NeVe 1 galaxy, was five times the energy of MS 0735.6+7421.
Black hole eruption
Using data from the Chandra X-ray Observatory, scientists have deduced that an eruption has been occurring for the last 100 million years at the heart of the galaxy cluster, releasing as much energy over this time as hundreds of millions of gamma ray bursts. (The amount of energy released in a year is thus equivalent to several GRBs.) The remnants of the eruption are seen as two cavities on either side of a large central galaxy. If this outburst, with a total energy budget of more than 10⁵⁵ J, was caused by a black hole accretion event, it must have consumed nearly 600 million solar masses.
Work done by Brian McNamara et al. (2008) points out the striking possibility that the outburst was not the result of an accretion event, but was instead powered by the rotation of the black hole. Moreover, the scientists mentioned the possibility that the central black hole in MS 0735.6+7421 could be one of the biggest black holes inhabiting the visible universe. This speculation is supported by the fact that the central cD galaxy inside MS 0735.6+7421 possesses the largest break radius known to date. With a calculated light deficit of more than 20 billion solar luminosities and an assumed mass-to-light ratio of 3, this yields a central black hole mass well above 10 billion solar masses, provided that the break radius was caused by the merger of several black holes in the past. In combination with the gargantuan energy outburst it is therefore very likely that MS 0735.6+7421 hosts a supermassive black hole in its core.
The cluster has a redshift corresponding to a recessional velocity of 64,800 ± 900 km/s (z ≈ 0.216) and an apparent size of 25.
Newer calculations using the spheroidal luminosity of the central galaxy and the estimation of its break radius yielded black hole masses of 15.85 billion and 51.3 billion solar masses, respectively.
Brightest cluster galaxy
The brightest cluster galaxy in MS 0735.6+7421 is the elliptical galaxy 4C +74.13. Also catalogued as LEDA 2760958, it is classified as a radio galaxy. With a diameter of around 400 kpc, the galaxy shows a steep-spectrum radio source. The core of 4C +74.13 has a spectral index between 325 and 1400 MHz of α = −1.54, while its outer radio lobes were found to measure α < −3.1. According to studies, it is evident that the core activity has recently restarted in the form of two inner lobes. It is also known to have ongoing star formation. With its stellar core estimated to be 3.8 kiloparsecs across, it is indicated that 4C +74.13 might well contain an ultramassive black hole in its center.
X-ray source
Hot X-ray emitting gas pervades MS 0735.6+7421. Two vast cavities—each 600,000 ly in diameter—appear on opposite sides of a large galaxy at the center of the cluster. These cavities are filled with a two-sided, elongated, magnetized bubble of extremely high-energy electrons that emit radio waves.
See also
X-ray astronomy
Astrophysical X-ray source
AT 2021lwx
References
External links
Most Powerful Eruption In The Universe Discovered NASA/Marshall Space Flight Center (ScienceDaily) January 6, 2005
MS 0735.6+7421: Most Powerful Eruption in the Universe Discovered (CXO at Harvard)
Hungry for More (NASA)
Super-Super-massive Black Hole (Universetoday)
A site for the cluster
An Energetic AGN Outburst Powered by a Rapidly Spinning Supermassive Black Hole
Scientists Reveal Secrets to Burping Black Hole with the Green Bank Telescope
Galaxy clusters
Camelopardalis
X-ray astronomy
Astronomical X-ray sources | MS 0735.6+7421 | Astronomy | 907 |
2,543,904 | https://en.wikipedia.org/wiki/Mixed%20threat%20attack | Regarding computer security, a mixed threat attack is an attack that uses several different tactics to infiltrate a computer user's environment. A mixed threat attack might include an infected file that comes in by way of spam or can be received by an Internet download. Mixed threat attacks try to exploit multiple vulnerabilities to get into a system. By launching multiple diverse attacks in parallel, the attacker can exploit more entry points than with just a single attack.
Because these threats combine multiple single attacks, they are much harder to detect. Firewalls can help: if configured correctly, they are somewhat effective against this type of attack. However, if the attack is embedded inside an application, a firewall can no longer prevent it. A typical countermeasure is to characterise the mixed threat with a signature that virus-removal software can use for identification. Such techniques need to be employed on the host machine, because the firewall or intrusion detection system is sometimes unable to detect the attack.
Nimda and Code Red are examples of computer worms that utilized mixed threat attacks.
See also
Computer Security
References
Computer security exploits | Mixed threat attack | Technology | 232 |
9,759,038 | https://en.wikipedia.org/wiki/Banach%27s%20matchbox%20problem | Banach's match problem is a classic problem in probability attributed to Stefan Banach. Feller says that the problem was inspired by a humorous reference to Banach's smoking habit in a speech honouring him by Hugo Steinhaus, but that it was not Banach who set the problem or provided an answer.
Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers for the first time that the box picked is empty. If it is assumed that each of the matchboxes originally contained $N$ matches, what is the probability that there are exactly $k$ matches in the other box?
Solution
Without loss of generality consider the case where the matchbox in his right pocket has an unlimited number of matches, and let $M$ be the number of matches removed from this one before the left one is found to be empty. When the left pocket is found to be empty, the man has chosen that pocket $N+1$ times. Then $M$ is the number of successes before $N+1$ failures in Bernoulli trials with $p = \tfrac{1}{2}$, which has the negative binomial distribution and thus

$$P(M = m) = \binom{N+m}{m}\left(\frac{1}{2}\right)^{N+1+m}.$$
Returning to the original problem, we see that the probability that the left pocket is found to be empty first is $P(M \le N)$, which equals $\tfrac{1}{2}$ because both pockets are equally likely. We see that the number $K$ of matches remaining in the other pocket is distributed as

$$P(K = k) = 2\, P(M = N - k) = \binom{2N-k}{N}\left(\frac{1}{2}\right)^{2N-k}.$$
The expectation of the distribution is approximately $2\sqrt{N/\pi} - 1$. (This is shown using Stirling's approximation.) So starting with boxes of $N = 100$ matches, the expected number of matches in the second box is about 10.
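A short Python check of these formulas (the choice N = 100 is ours, not from the original problem statement):

```python
from math import comb, sqrt, pi

N = 100  # matches initially in each box

# P(K = k) = C(2N - k, N) * (1/2)**(2N - k)
pmf = {k: comb(2 * N - k, N) * 0.5 ** (2 * N - k) for k in range(N + 1)}
print(sum(pmf.values()))                 # sanity check: 1.0
print(sum(k * p for k, p in pmf.items()))  # exact expectation, ~10.3
print(2 * sqrt(N / pi) - 1)              # Stirling approximation, ~10.3
```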
See also
List of things named after Stefan Banach
References
External links
Java applet
Applied probability
Probability problems | Banach's matchbox problem | Mathematics | 344 |
11,570,273 | https://en.wikipedia.org/wiki/Inonotus%20dryophilus | Inonotus dryophilus is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
dryophilus
Fungi described in 1904
Fungus species | Inonotus dryophilus | Biology | 42 |
1,529,072 | https://en.wikipedia.org/wiki/European%20Space%20Astronomy%20Centre | The European Space Astronomy Centre (ESAC) near Madrid in Spain is a research centre of the European Space Agency (ESA). ESAC is the lead institution for space science (astronomy, Solar System exploration and fundamental physics) using ESA missions. It hosts the science operation centres for all ESA astronomy and planetary missions and their scientific data archives. ESA's Cebreros Station deep-space communication antennas are located nearby.
Location
ESAC is located within the municipal limits of Villanueva de la Cañada, some 30 km west of Madrid in the Guadarrama Valley. The site is surrounded by light woodland and is adjacent to 15th-century ruins.
Missions
Past and present missions handled from ESAC include (in alphabetical order): Akari, BepiColombo, Cassini–Huygens, Cluster, Exomars, Gaia, Herschel Space Observatory, Hubble Space Telescope, ISO, INTEGRAL, IUE, James Webb Space Telescope, LISA Pathfinder, Mars Express, Planck, Rosetta, SOHO, Solar Orbiter, Venus Express, and XMM-Newton.
Future missions include: Athena, Euclid, JUICE, and Plato.
In addition to deep space and solar system exploration, ESAC hosts the data processing of SMOS, a satellite observing the Earth, and the CESAR educational programme.
ESAC is also involved in ESA missions conducted in collaboration with other space agencies. One example is Akari, a Japanese-led mission to carry out an infrared sky survey, launched on 21 February 2006. Collaborative programmes include the NASA-led James Webb Space Telescope, the successor to the Hubble Space Telescope.
Communications
An ESA radio ground station for communication with spacecraft is located in Cebreros, Avila, about 90 km from Madrid and 65 km from ESAC. This installation provides essential support to the activities of ESAC. Inaugurated in September 2005, Cebreros has a 35-metre antenna used to communicate with distant missions to Mercury, Venus, Mars and beyond.
The Madrid Deep Space Communications Complex is also located nearby, operated by the Instituto Nacional de Técnica Aeroespacial. It is a station of the Deep Space Network used primarily for NASA missions, but sometimes supplements Cebreros in communicating with ESA spacecraft. It has a 70-metre antenna, six 34-m antennae and one 26-m antenna.
Two 15-metre radio antennae are located on the ESAC site, but were decommissioned in 2017.
Other facilities
ESAC also hosts a branch of the Spanish Astrobiology Center (CAB).
See also
Facilities of the European Space Agency
References
External links
European Space Astronomy Centre website
Astronomy in Europe
European Space Agency facilities
Space telescopes | European Space Astronomy Centre | Astronomy | 545 |
954,234 | https://en.wikipedia.org/wiki/Dwarf%20galaxy%20problem | The dwarf galaxy problem, also known as the missing satellites problem, arises from a mismatch between observed dwarf galaxy numbers and collisionless numerical cosmological simulations that predict the evolution of the distribution of matter in the universe. In simulations, dark matter clusters hierarchically, in ever increasing numbers of halo "blobs" as halos' components' sizes become smaller-and-smaller. However, although there seem to be enough observed normal-sized galaxies to match the simulated distribution of dark matter halos of comparable mass, the number of observed dwarf galaxies is orders of magnitude lower than expected from such simulation.
Context
For example, around 38 dwarf galaxies have been observed in the Local Group, and only around 11 orbiting the Milky Way, yet dark matter simulations predict that there should be around 500 dwarf satellites for the Milky Way alone.
Prospective resolution
There are two main alternatives which may resolve the dwarf galaxy problem: The smaller-sized clumps of dark matter may be unable to obtain or retain the baryonic matter needed to form stars in the first place; or, after they form, dwarf galaxies may be quickly “eaten” by the larger galaxies that they orbit.
Baryonic matter too sparse
One proposal is that the smaller halos do exist but that only a few of them end up becoming visible, because they are unable to acquire enough baryonic matter to form a visible dwarf galaxy. In support of this, in 2007 the Keck telescopes observed eight newly discovered ultra-faint Milky Way dwarf satellites of which six were around 99.9% dark matter (with a mass-to-light ratio of about 1,000).
Early demise of young dwarfs
The other popular proposed solution is that dwarf galaxies may tend to merge into the galaxies they orbit shortly after star-formation, or to be quickly torn apart and tidally stripped by larger galaxies, due to complicated orbital interactions.
Tidal stripping may also have been part of the problem of detecting dwarf galaxies in the first place: Finding dwarf galaxies is an extremely difficult task, since they tend to have low surface brightness and are highly diffuse – so much so that they are close to blending into background and foreground stars.
See also
Dark galaxy
Cold dark matter
Cuspy halo problem (also known as "the core/cusp problem")
List of unsolved problems in physics
Footnotes
References
External links
Dark matter
Galaxies
Large-scale structure of the cosmos
Unsolved problems in physics | Dwarf galaxy problem | Physics,Astronomy | 492 |
7,752,786 | https://en.wikipedia.org/wiki/APNPP | The APNPP, an acronym of "l’association des pays non producteurs de pétrole" (in English: the "Pan-African Non-Petroleum Producers Association"), is an association of 15 African nations that signed a treaty in July 2006.
Their stated aim is to work together to promote biofuel production and reduce the effects of high oil prices.
The APNPP, which was first proposed by Abdoulaye Wade, is led by the Ministry of Energy and Mines of Senegal; its acting head is Madické Niang.
Members (and HDI)
Benin (HDI: 0.525)
Burkina Faso (HDI: 0.449)
The Democratic Republic of the Congo (HDI: 0.479)
Gambia (HDI: 0.500)
Ghana (HDI: 0.632)
Guinea (HDI: 0.465)
Guinea-Bissau (HDI: 0.483)
Madagascar (HDI: 0.501)
Mali (HDI: 0.428)
Morocco (HDI: 0.683)
Niger (HDI: 0.400)
Senegal (HDI: 0.511)
Sierra Leone (HDI: 0.477)
Togo (HDI: 0.539)
Zambia (HDI: 0.565)
References
External links
A closer look at Africa's 'Green OPEC', August 2, 2006
Africa Over A Barrel, The Washington Post, October 28, 2006
"Biofuels: Strategic Choices for Commodity Dependent Developing Countries", by Sonja Vermeulen and others from the International Institute for Economic Development for the Common Fund for Commodities, November 2007
International energy organizations
International organizations based in Africa
Organizations established in 2006
Intergovernmental organizations established by treaty | APNPP | Engineering | 373 |
465,156 | https://en.wikipedia.org/wiki/Dipole%20antenna | In radio and telecommunications a dipole antenna or doublet
is one of the two simplest and most widely-used types of antenna; the other is the monopole. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each far end. A dipole antenna commonly consists of two identical conductive elements
such as metal wires or rods. The driving current from the transmitter is applied, or for receiving antennas the output signal to the receiver is taken, between the two halves of the antenna. Each side of the feedline to the transmitter or receiver is connected to one of the conductors. This contrasts with a monopole antenna, which consists of a single rod or conductor with one side of the feedline connected to it, and the other side connected to some type of ground. A common example of a dipole is the rabbit ears television antenna found on broadcast television sets. All dipoles are electrically equivalent to two monopoles mounted end-to-end and fed with opposite phases, with the ground plane between them made virtual by the opposing monopole.
The dipole is the simplest type of antenna from a theoretical point of view. Most commonly it consists of two conductors of equal length oriented end-to-end with the feedline connected between them.
Dipoles are frequently used as resonant antennas. If the feedpoint of such an antenna is shorted, then it will be able to resonate at a particular frequency, just like a guitar string that is plucked. Using the antenna at around that frequency is advantageous in terms of feedpoint impedance (and thus standing wave ratio), so its length is determined by the intended wavelength (or frequency) of operation. The most commonly used is the center-fed half-wave dipole which is just under a half-wavelength long. The radiation pattern of the half-wave dipole is maximum perpendicular to the conductor, falling to zero in the axial direction, thus implementing an omnidirectional antenna if installed vertically, or (more commonly) a weakly directional antenna if horizontal.
Although they may be used as standalone low-gain antennas, dipoles are also employed as driven elements in more complex antenna designs such as the Yagi antenna and driven arrays. Dipole antennas (or such designs derived from them, including the monopole) are used to feed more elaborate directional antennas such as a horn antenna, parabolic reflector, or corner reflector. Engineers analyze vertical (or other monopole) antennas on the basis of dipole antennas of which they are one half.
History
German physicist Heinrich Hertz first demonstrated the existence of radio waves in 1887 using what we now know as a dipole antenna (with capacitive end-loading). On the other hand, Guglielmo Marconi empirically found that he could just ground the transmitter (or one side of a transmission line, if used), dispensing with one half of the antenna, thus realizing the vertical or monopole antenna.
For the low frequencies Marconi employed to achieve long-distance communications, this form was more practical; when radio moved to higher frequencies (especially VHF transmissions for FM radio and TV) it was advantageous for these much smaller antennas to be entirely atop a tower thus requiring a dipole antenna or one of its variations.
In the early days of radio, the thus-named Marconi antenna (monopole) and the doublet (dipole) were seen as distinct inventions. Now, however, the monopole antenna is understood as a special case of a dipole which has a virtual element underground.
Dipole variations
Short dipole
A short dipole is a dipole formed by two conductors with a total length substantially less than a half wavelength. Short dipoles are sometimes used in applications where a full half-wave dipole would be too large. They can be analyzed easily using the results obtained below for the Hertzian dipole, a fictitious entity. Being shorter than a resonant antenna (half wavelength long), its feedpoint impedance includes a large capacitive reactance requiring a loading coil or other matching network in order to be practical, especially as a transmitting antenna.
To find the far-field electric and magnetic fields generated by a short dipole we use the result shown below for the Hertzian dipole (an infinitesimal current element) at a distance $r$ from the current and at an angle $\theta$ to the direction of the current, as being:

$$E_\theta = \frac{j Z_0 I_0\,\delta\ell}{2\lambda r}\,\sin\theta\; e^{j(\omega t - kr)}, \qquad H_\phi = \frac{E_\theta}{Z_0},$$

where the radiator consists of a current of $I_0 e^{j\omega t}$ over a short length $\delta\ell$, and $j$ in electronics replaces the customary mathematical symbol $i$ for the square root of $-1$. $\omega$ is the angular (radian) frequency and $k = 2\pi/\lambda$ is the wavenumber; $Z_0 \approx 377\ \Omega$ is the impedance of free space, which is the ratio of a free space plane wave's electric to magnetic field strength.
The feedpoint is usually at the center of the dipole as shown in the diagram. The current along the dipole arms is approximately described as proportional to $\sin(kz)$, where $z$ is the distance to the nearest end of the arm. In the case of a short dipole, that is essentially a linear drop from $I_0$ at the feedpoint to zero at the end. Therefore, this is comparable to a Hertzian dipole with an effective current $I_h$ equal to the average current over the conductor, so $I_h = \tfrac{1}{2} I_0$. With that substitution, the above equations closely approximate the fields generated by a short dipole of total length $L$ fed by current $I_0$.
From the fields calculated above, one can find the radiated flux (power per unit area) at any point as the magnitude of the real part of the Poynting vector, $\mathbf{S} = \tfrac{1}{2}\,\mathbf{E} \times \mathbf{H}^{*}$. Because $E_\theta$ and $H_\phi$ are at right angles and in phase, there is no imaginary part and the magnitude of the cross product is simply $\tfrac{1}{2} E_\theta H_\phi^{*}$; the phase factors (the exponentials) cancel out, leaving:

$$S = \frac{Z_0}{32}\left(\frac{I_0 L}{\lambda r}\right)^{2} \sin^{2}\theta.$$

We have now expressed the flux in terms of the feedpoint current $I_0$ and the ratio of the short dipole's length $L$ to the wavelength of radiation $\lambda$. The radiation pattern, given by $\sin^{2}\theta$, is seen to be similar to and only slightly less directional than that of the half-wave dipole.
Using the above expression for the radiation in the far field for a given feedpoint current, we can integrate over all solid angle to obtain the total radiated power:

$$P_{\text{total}} = \frac{\pi Z_0}{12}\left(\frac{I_0 L}{\lambda}\right)^{2}.$$

From that, it is possible to infer the radiation resistance, equal to the resistive (real) part of the feedpoint impedance, neglecting a component due to ohmic losses (presumed smaller). By setting $P_{\text{total}}$ equal to the power supplied at the feedpoint, $\tfrac{1}{2} I_0^{2} R_{\text{radiation}}$, we find:

$$R_{\text{radiation}} = \frac{\pi Z_0}{6}\left(\frac{L}{\lambda}\right)^{2} \approx 20\pi^{2}\left(\frac{L}{\lambda}\right)^{2}.$$

Again, these approximations become quite accurate for $L \ll \tfrac{1}{2}\lambda$. Setting $L = \tfrac{1}{2}\lambda$, despite the formula not quite being valid for so large a fraction of the wavelength, it would predict a radiation resistance of 49 Ω, instead of the actual value of 73 Ω produced by a half-wave dipole, when the more correct quarter-wave sinusoidal currents are used.
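As a quick numerical check of these results, the sketch below (ours, not part of the original text; the constant $Z_0$ is the standard free-space value) evaluates the short-dipole radiation resistance for a few lengths:

```python
import numpy as np

Z0 = 376.73  # impedance of free space, in ohms

def r_short_dipole(length_over_lambda):
    """Radiation resistance of a short dipole with triangular current
    distribution: R = (pi * Z0 / 6) * (L/lambda)**2, about 20 pi^2 (L/lambda)^2."""
    return np.pi * Z0 / 6.0 * length_over_lambda ** 2

for frac in (0.05, 0.1, 0.5):
    print(f"L = {frac:>4} lambda:  R = {r_short_dipole(frac):6.2f} ohm")
# At L = 0.5 the short-dipole formula gives ~49 ohm, versus the true
# 73 ohm of a half-wave dipole with sinusoidal current.
```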
Dipole antennas of various lengths
The fundamental resonance of a thin linear conductor occurs at a frequency whose free-space wavelength is twice the wire's length; i.e. where the conductor is 1/2 wavelength long. Dipole antennas are frequently used at around that frequency and thus termed half-wave dipole antennas. This important case is dealt with in the next section.
Thin linear conductors of length $\ell$ are in fact resonant at any integer multiple of a half-wavelength:

$$\ell = \frac{n\lambda}{2} \cdot \frac{v}{c},$$

where $n$ is an integer, $\lambda$ is the free-space wavelength, and $v$ is the reduced speed of radio waves in the radiating conductor (slightly below $c$, the speed of light). For a center-fed dipole, however, there is a great dissimilarity between $n$ being odd or $n$ being even. Dipoles which are an odd number of half-wavelengths in length have reasonably low driving point impedances (which are purely resistive at that resonant frequency). However, ones which are an even number of half-wavelengths in length, that is, an integer number of wavelengths in length, have a high driving point impedance (albeit purely resistive at that resonant frequency).
For instance, a full-wave dipole antenna can be made with two half-wavelength conductors placed end to end for a total length of approximately one wavelength ($L = \lambda$). This results in an additional gain over a half-wave dipole of about 2 dB. Full wave dipoles can be used in short wave broadcasting only by making the effective diameter very large and feeding from a high impedance balanced line. Cage dipoles are often used to get the large diameter.
A 5/4-wave dipole antenna has a much lower but not purely resistive feedpoint impedance, which requires a matching network to the impedance of the transmission line. Its gain is about 3 dB greater than a half-wave dipole, the highest gain of any dipole of any similar length.
{| class="wikitable" style="text-align:center;"
|+ Gain of dipole antennas
|-
! Length, inwavelengths
! Directivegain(dBi)
! Notes
|-
| ≪ 0.5
| 1.76
|style="text-align:left;"| Poor efficiency
|-
| 0.5
| 2.15
|style="text-align:left;"| Most common
|-
| 1.0
| 4.0
|style="text-align:left;"| Only with fat dipoles
|-
| 1.25
| 5.2
|style="text-align:left;"| Greatest gain
|-
| 1.5
| 3.5
|style="text-align:left;"| Third harmonic
|-
| 2.0
| 4.3
|style="text-align:left;"| Not used
|}
Other reasonable lengths of dipole do not offer advantages and are seldom used. However, the overtone resonances of a half-wave dipole antenna at odd multiples of its fundamental frequency are sometimes exploited. For instance, amateur radio antennas designed as half-wave dipoles at 7 MHz can also be used as 3/2-wave dipoles at 21 MHz; likewise VHF television antennas resonant at the low VHF television band (centered around 65 MHz) are also resonant at the high VHF television band (around 195 MHz).
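The gains tabulated above can be checked numerically by integrating the far-field pattern of a thin center-fed dipole with an assumed sinusoidal current distribution. The sketch below (ours, using SciPy) reproduces the table to within the rounding expected of thin-wire theory; for example, it gives about 3.8 dBi rather than 4.0 dBi at one wavelength, a case the table attributes to fat elements:

```python
import numpy as np
from scipy.integrate import quad

def directivity_dbi(L_over_lambda):
    """Directive gain of a thin center-fed dipole of total length L,
    assuming a sinusoidal standing-wave current; the normalized pattern
    is F(t) = [cos(kL/2 * cos t) - cos(kL/2)] / sin t."""
    a = np.pi * L_over_lambda                      # kL/2
    F = lambda t: (np.cos(a * np.cos(t)) - np.cos(a)) / np.sin(t)
    denom, _ = quad(lambda t: F(t) ** 2 * np.sin(t), 1e-9, np.pi - 1e-9)
    t = np.linspace(1e-3, np.pi - 1e-3, 2001)      # scan for the peak lobe
    return 10.0 * np.log10(2.0 * np.max(F(t) ** 2) / denom)

for L in (0.5, 1.0, 1.25, 1.5):
    print(f"{L:>4} wavelengths: {directivity_dbi(L):.2f} dBi")
# -> roughly 2.15, 3.8, 5.2, 3.5 dBi
```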
Half-wave dipole
A half-wave dipole antenna consists of two quarter-wavelength conductors placed end to end for a total length of approximately $L = \lambda/2$. The current distribution is that of a standing wave, approximately sinusoidal along the length of the dipole, with a node at each end and an antinode (peak current) at the center (feedpoint):

$$I(z) = I_0 \cos(kz)\, e^{j\omega t},$$

where $k = 2\pi/\lambda$ and $z$ runs from $-L/2$ to $L/2$.
In the far field, this produces a radiation pattern whose electric field is given by

$$E_\theta = \frac{j Z_0 I_0}{2\pi r}\,\frac{\cos\!\left(\tfrac{\pi}{2}\cos\theta\right)}{\sin\theta}\; e^{j(\omega t - kr)}.$$

The directional factor $\cos\!\left(\tfrac{\pi}{2}\cos\theta\right)/\sin\theta$ is very nearly the same as the $\sin\theta$ applying to the short dipole, resulting in a very similar radiation pattern as noted above.
A numerical integration of the radiated power over all solid angle, as we did for the short dipole, obtains a value for the total power $P_{\text{total}}$ radiated by the dipole with a current having a peak value of $I_0$ as in the form specified above. Dividing $P_{\text{total}}$ by $4\pi r^{2}$ supplies the flux at a large distance $r$, averaged over all directions. Dividing the flux in the $\theta = \pi/2$ direction (where it is at its peak) at that large distance by the average flux, we find the directive gain to be 1.64. This can also be directly computed using the cosine integral:

$$G = \frac{4}{\operatorname{Cin}(2\pi)} \approx 1.64 \quad (2.15\ \text{dBi}).$$

The $\operatorname{Cin}$ form of the cosine integral is not the same as the $\operatorname{Ci}$ form; they differ by a logarithm, $\operatorname{Cin}(x) = \gamma + \ln x - \operatorname{Ci}(x)$. Both MATLAB and Mathematica have inbuilt functions which compute $\operatorname{Ci}(x)$, but not $\operatorname{Cin}(x)$. See the Wikipedia page on cosine integral for the relationship between these functions.
We can now also find the radiation resistance as we did for the short dipole by solving:

$$P_{\text{total}} = \tfrac{1}{2} I_0^{2}\, R_{\text{radiation}}$$

to obtain:

$$R_{\text{radiation}} = \frac{Z_0}{4\pi}\operatorname{Cin}(2\pi) \approx 73.1\ \Omega.$$

Using the induced EMF method, the real part of the driving point impedance can also be written in terms of the cosine integral, obtaining the same result.
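Since (as noted above) common numerical packages provide Ci but not Cin, the half-wave dipole's radiation resistance and gain can be evaluated as in this sketch of ours, using SciPy's `sici` (which returns the pair Si, Ci):

```python
import numpy as np
from scipy.special import sici

Z0 = 376.73  # impedance of free space, in ohms

def cin(x):
    """Cin(x) = gamma + ln(x) - Ci(x); SciPy (like MATLAB) provides only Ci."""
    _, ci = sici(x)
    return np.euler_gamma + np.log(x) - ci

R = Z0 / (4.0 * np.pi) * cin(2.0 * np.pi)  # radiation resistance
G = 4.0 / cin(2.0 * np.pi)                 # directive gain
print(f"R = {R:.1f} ohm,  G = {G:.3f}  ({10.0 * np.log10(G):.2f} dBi)")
# -> R = 73.1 ohm,  G = 1.641  (2.15 dBi)
```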
If a half-wave dipole is driven at a point other than the center, then the feed point resistance will be higher. The radiation resistance is usually expressed relative to the maximum current present along an antenna element, which for the half-wave dipole (and most other antennas) is also the current at the feedpoint. However, if the dipole is fed at a different point at a distance $x$ from a current maximum (the center in the case of a half-wave dipole), then the current there is not $I_0$ but only $I_0 \cos(kx)$.

In order to supply the same power, the voltage at the feedpoint has to be similarly increased by the factor $1/\cos(kx)$.

Consequently, the resistive part of the feedpoint impedance is increased by the factor $1/\cos^{2}(kx)$:

$$R_{\text{feedpoint}} = \frac{R_{\text{radiation}}}{\cos^{2}(kx)}.$$

This equation can also be used for dipole antennas of any length, provided that $R_{\text{radiation}}$ has been computed relative to the current maximum, which is not generally the same as the feedpoint current for dipoles longer than half-wave. Note that this equation breaks down when feeding an antenna near a current node, where $\cos(kx)$ approaches zero. The driving point impedance does indeed rise greatly, but is nevertheless limited due to higher order components of the elements' not-quite-exactly-sinusoidal current, which have been ignored above in the model for the current distribution.
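A small sketch (ours) of the off-center feed resistance, valid away from the current node where, as just noted, the simple model breaks down:

```python
import numpy as np

def feedpoint_resistance(x_over_lambda, r_center=73.1):
    """Resistive part of the feed impedance of a half-wave dipole fed at a
    distance x from the current maximum: R = R_center / cos^2(kx)."""
    kx = 2.0 * np.pi * x_over_lambda
    return r_center / np.cos(kx) ** 2

print(feedpoint_resistance(0.0))    # 73.1 ohm when fed at the center
print(feedpoint_resistance(0.125))  # ~146 ohm an eighth-wave off center
```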
Folded dipole
A folded dipole is a half-wave dipole with an additional parallel wire connecting its two ends. If the additional wire has the same diameter and cross-section as the dipole, two nearly identical radiating currents are generated. The resulting far-field emission pattern is nearly identical to the one for the single-wire dipole described above, but at resonance its feedpoint impedance is four times the radiation resistance of a single-wire dipole.
A folded dipole is, technically, a folded full-wave loop antenna, where the loop has been bent at opposing ends and squashed into two parallel wires in a flat line. Although the broad bandwidth, high feedpoint impedance, and high efficiency are characteristics more similar to a full loop antenna, the folded dipole's radiation pattern is more like an ordinary dipole. Since the operation of a single halfwave dipole is easier to understand, both full loops and folded dipoles are often described as two halfwave dipoles in parallel, connected at the ends.
The high feedpoint impedance at resonance is because, for a fixed amount of power, the total radiating current $I_0$ is equal to twice the current in each wire separately and thus equal to twice the current at the feed point. Equating the average radiated power to the average power delivered at the feedpoint, we may write

$$\tfrac{1}{2} I_0^{2}\, R_{\text{half-wave}} = \tfrac{1}{2} \left(\tfrac{I_0}{2}\right)^{2} R_{\text{fd}},$$

where $R_{\text{half-wave}}$ is the lower feedpoint impedance of the resonant half-wave dipole. It follows that

$$R_{\text{fd}} = 4\, R_{\text{half-wave}} \approx 292\ \Omega.$$
Half-wave folded dipoles are often used for FM radio antennas; versions made with twin lead which can be hung on an inside wall often come with FM tuners. They are also widely used as driven elements for rooftop Yagi television antennas. The T²FD antenna is a folded dipole with a resistor added on the second wire, opposite the feedpoint.
The folded dipole is therefore well matched to 300 Ω balanced transmission lines, such as twin-feed ribbon cable. The folded dipole has a wider bandwidth than a single dipole. They can be used for transforming the value of input impedance of the dipole over a broad range of step-up ratios by changing the thicknesses of the wire conductors for the fed- and folded-sides.
Instead of altering thickness or spacing, one can add a third parallel wire to increase the antenna impedance to 9 times that of a single-wire dipole, raising the impedance to 658 Ω, making a good match for open wire feed cable, and further broadening the resonant frequency band of the antenna. More extra parallel wires can be added: any number of extra parallel wires can be joined onto the antenna, with the radiation resistance (and feedpoint impedance) given by

$$R = n^{2} \times 73\ \Omega,$$

where $n$ is the number of parallel halfwave-long wires laid side-by-side in the antenna, and connected at their ends. It is also possible to modify the so-called flattened-loop design, and get nearly as good performance, by making each of the parallel wires too short by the same amount, but connecting a single capacitive loading wire (going off in nearly any direction, most often dangling) on each of the antenna ends. The loading wire length is equal to the single missing length of one of the parallel wires.
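In code form (a trivial sketch of ours), the n-squared scaling reads:

```python
def folded_dipole_resistance(n_wires, r_half_wave=73.1):
    """Feedpoint resistance of a folded dipole built from n_wires parallel
    half-wave conductors joined at the ends: R = n^2 * R(single dipole)."""
    return n_wires ** 2 * r_half_wave

for n in (1, 2, 3):
    print(n, folded_dipole_resistance(n))  # 73, ~292 (~300), ~658 ohm
```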
Other variants
There are numerous modifications to the shape of a dipole antenna which are useful in one way or another but result in similar radiation characteristics (low gain). This is not to mention the many directional antennas which include one or more dipole elements in their design as driven elements, many of which are linked to in the information box at the bottom of this page.
The bow-tie antenna is a dipole with flaring, triangular shaped arms. The shape gives it a much wider bandwidth than an ordinary dipole. It is widely used in UHF television antennas.
The cage dipole is a similar modification in which the bandwidth is increased by using fat cylindrical dipole elements made of a cage of wires (see photo). These are used in a few broadband array antennas in the medium wave and shortwave bands for applications such as over-the-horizon radar and radio telescopes.
A halo antenna is a half-wave dipole bent into a circle for a nearly uniform radiation pattern in the plane of the circle. When the halo's circle is horizontal, it produces horizontally polarized radiation in a nearly omnidirectional pattern with only a little power wasted toward the zenith, compared to a straight horizontal dipole. In practice, it is categorized either as a bent dipole or as a loop antenna, depending on author preference.
A turnstile antenna comprises two dipoles crossed at a right angle and a feed system which introduces a quarter-wave phase difference between the currents along the two. With that geometry, the two dipoles do not interact electrically but their fields add in the far-field, producing a net radiation pattern that is rather close to isotropic, with horizontal polarization in the plane of the elements and circular or elliptical polarization at other angles. Turnstile antennas can be stacked and fed in phase to realize an omnidirectional broadside array, or phased for an end-fire array with circular polarization.
The batwing antenna is a turnstile antenna with its linear elements widened as in a bow-tie antenna, again for the purpose of widening its resonant frequency and thus usable over a larger bandwidth, without re-tuning. When stacked to form an array the radiation is omnidirectional, horizontally polarized, and with increased gain at low elevations, making it ideal for television broadcasting.
A V antenna is a dipole with a bend in the middle so its arms are at an angle instead of co-linear.
A quadrant antenna is a 'V' antenna with an unusual overall length of a full wavelength, with two half-wave horizontal elements meeting at a right angle where it is fed. Quadrant antennas produce mostly horizontal polarization at low to intermediate elevation angles and have nearly omnidirectional radiation patterns.
One implementation uses cage elements (see above); the thickness of the resulting elements lowers the high driving point impedance of a full-wave dipole to a value that accommodates a reasonable match to open wire lines and increases the bandwidth (in terms of SWR) to a full octave. They are used for HF band transmissions.
The G5RV antenna is a dipole antenna fed indirectly, through a carefully chosen length of 300 Ω or 450 Ω twin lead, which acts as an impedance matching network to connect (through a balun) to a standard 50 Ω coaxial transmission line.
The sloper antenna is a slanted vertical dipole antenna attached to the top of a single tower. The element can be center-fed or can be end-fed as an unbalanced monopole antenna from a transmission line at the top of the tower, in which case the monopole's ground connection can better be viewed as a second element comprising the tower or transmission line shield.
The inverted 'V' antenna is likewise supported using a single tower but is a balanced antenna with two symmetric elements angled toward the ground. It is thus a half-wave dipole with a bend in the middle. Like the sloper, this has the practical advantage of elevating the antenna but requiring only a single tower.
The AS-2259 antenna is an inverted-‘V’ dipole antenna used for local communications via Near Vertical Incidence Skywave (NVIS).
Vertical (monopole) antennas
The vertical, Marconi, or monopole antenna is a single-element antenna usually fed at the bottom (with the shield side of its unbalanced transmission line connected to ground). It behaves essentially the same as half of a dipole antenna. The ground (or ground plane) is considered to be a conductive surface that works as a reflector (see effect of ground). Vertical currents in the reflected image have the same direction (thus are not reflected about the ground) and phase as the current in the real antenna. The conductor and its image together act as a dipole in the upper half of space. Like a dipole, in order to achieve resonance (resistive feedpoint impedance) the conductor must be close to a quarter wavelength in height (like each conductor in a half-wave dipole).
In this upper side of space, the emitted field has the same amplitude as the field radiated by a similar dipole fed with the same current. Therefore, the total emitted power is half the emitted power of a dipole fed with the same current. As the current is the same, the radiation resistance (real part of series impedance) will be half of the series impedance of the comparable dipole. A quarter-wave monopole, then, has an impedance of about 36 Ω (half of 73 Ω). Another way of seeing this is that a true dipole receiving a current $I$ has voltages on its terminals of $+V$ and $-V$, for an impedance across the terminals of $2V/I$, whereas the comparable vertical antenna has the current $I$ but an applied voltage of only $V$.

Since the fields above ground are the same as for the dipole, but only half the power is applied, the gain is doubled to 3.28 (5.2 dBi). This is not an actual performance advantage per se, since in practice a dipole also reflects half of its power off the ground which (depending on the antenna height and sky angle) can augment (or cancel!) the direct signal. The vertical polarization of the monopole (as for a vertically oriented dipole) is advantageous at low elevation angles where the ground reflection combines with the direct wave approximately in phase.
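Numerically (a sketch of ours, with the half-wave dipole values quoted above), the halving of the feed resistance and the doubling of the gain go together:

```python
from math import log10

R_halfwave, G_halfwave = 73.1, 1.64   # center-fed half-wave dipole

R_monopole = R_halfwave / 2.0         # same current, half the driving voltage
G_monopole = 2.0 * G_halfwave         # same fields above ground, half the power
print(f"{R_monopole:.1f} ohm, {10.0 * log10(G_monopole):.2f} dBi")
# -> 36.6 ohm, 5.16 dBi for an ideal quarter-wave monopole
```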
The earth acts as a ground plane, but it can be a poor conductor leading to losses. Its conductivity can be improved (at cost) by laying a copper mesh. When an actual ground is not available (such as in a vehicle) other metallic surfaces can serve as a ground plane (typically the vehicle's roof). Alternatively, radial wires placed at the base of the antenna can form a ground plane. For VHF and UHF bands, the radiating and ground plane elements can be constructed from rigid rods or tubes. Using such an artificial ground plane allows for the entire antenna and ground to be mounted at an arbitrary height. One common modification has the radials forming the ground plane sloped down, which has the effect of raising the feedpoint impedance to around 50 Ω, matching common coaxial cable. No longer being a true ground, a balun (such as a simple choke balun) is then recommended.
Dipole characteristics
Impedance of dipoles of various lengths
The feedpoint impedance of a dipole antenna is sensitive to its electrical length and feedpoint position. Therefore, a dipole will generally only perform optimally over a rather narrow bandwidth, beyond which its impedance will become a poor match for the transmitter or receiver (and transmission line). The real (resistive) and imaginary (reactive) components of that impedance, as a function of electrical length, are shown in the accompanying graph. The detailed calculation of these numbers are described below. Note that the value of the reactance is highly dependent on the diameter of the conductors; this plot is for conductors with a diameter of 0.001 wavelengths.
Dipoles that are much smaller than one half the wavelength of the signal are called short dipoles. These have a very low radiation resistance (and a high capacitive reactance) making them inefficient antennas. More of a transmitter's current is dissipated as heat due to the finite resistance of the conductors which is greater than the radiation resistance. However they can nevertheless be practical receiving antennas for longer wavelengths.
Dipoles whose length is approximately half the wavelength of the signal are called half-wave dipoles and are widely used as such or as the basis for derivative antenna designs. These have a radiation resistance which is much greater, closer to the characteristic impedances of available transmission lines, and normally much larger than the resistance of the conductors, so that their efficiency approaches 100%. In general radio engineering, the term dipole, if not further qualified, is taken to mean a center-fed half-wave dipole.
A true half-wave dipole is one half of the wavelength $\lambda$ in length, where $\lambda = c/f$ in free space. Such a dipole has a feedpoint impedance consisting of 73 Ω resistance and +43 Ω reactance, thus presenting a slightly inductive reactance. To cancel that reactance, and present a pure resistance to the feedline, the element is shortened by the factor $k$ for a net length $\ell$ of:

$$\ell = \frac{k\lambda}{2} = \frac{kc}{2f},$$

where $\lambda$ is the free-space wavelength, $c$ is the speed of light in free space, and $f$ is the frequency. The adjustment factor $k$, which causes feedpoint reactance to be eliminated, depends on the diameter of the conductor, as is plotted in the accompanying graph. $k$ ranges from about 0.98 for thin wires (diameter 0.00001 wavelengths) to about 0.94 for thick conductors (diameter 0.008 wavelengths). This is because the effect of antenna length on reactance (upper graph) is much greater for thinner conductors, so that a smaller deviation from the exact half wavelength is required in order to cancel the 43 Ω inductive reactance it has when exactly $\lambda/2$ long. For the same reason, antennas with thicker conductors have a wider operating bandwidth over which they attain a practical standing wave ratio, which is degraded by any remaining reactance.
For a typical $k$ of about 0.95, the above formula for the corrected antenna length can be written, for a length in meters, as $\ell = 143/f$, or for a length in feet as $\ell = 468/f$, where $f$ is the frequency in megahertz.
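These rules of thumb are easy to verify in a few lines (a sketch of ours; the value k = 0.95 is the typical figure quoted above):

```python
def half_wave_dipole_length(f_mhz, k=0.95):
    """Shortened half-wave dipole length l = k * c / (2 f); with k ~ 0.95
    this reduces to the usual 143/f (meters) and 468/f (feet) rules."""
    c = 299.792458                      # speed of light, in m * MHz
    meters = k * c / (2.0 * f_mhz)
    return meters, meters / 0.3048

m, ft = half_wave_dipole_length(7.1)    # e.g. the 40 m amateur band
print(f"{m:.2f} m  ({ft:.1f} ft)")      # ~20.1 m (~65.8 ft)
```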
Dipole antennas of lengths approximately equal to any odd multiple of $\tfrac{1}{2}\lambda$ are also resonant, presenting little or no reactance (which can be removed by making a small length adjustment). However, these are rarely used. One size that is a much more efficient radiator, both in terms of watts out and in direction radiated, is a dipole with a length of $\tfrac{5}{4}\lambda$. Not being close to an odd multiple of a half-wavelength, this antenna's impedance has a large (negative) reactance and can only be used with an inductive impedance matching network (a tapped loading coil or a so-called antenna tuner). It is a desirable length because such an antenna has the highest gain for any dipole which isn't a great deal longer.
Radiation pattern and gain
A dipole is omnidirectional in the plane perpendicular to the wire axis, with the radiation falling to zero on the axis (off the ends of the antenna). In a half-wave dipole, the radiation is maximum perpendicular to the antenna, declining as $\left[\cos\!\left(\tfrac{\pi}{2}\cos\theta\right)/\sin\theta\right]^{2}$ to zero on the axis. Its radiation pattern in three dimensions (see figure) would be plotted approximately as a toroid (doughnut shape) symmetric about the conductor. When mounted vertically this results in maximum radiation in horizontal directions. When mounted horizontally, the radiation peaks at right angles (90°) to the conductor, with nulls in the direction of the dipole.
Neglecting electrical inefficiency, the antenna gain is equal to the directive gain, which is 1.50 (1.76 dBi or −0.39 dBd) for a short dipole, increasing to 1.64 (2.15 dBi or 0 dBd) for a half-wave dipole. For a 5/4-wave dipole the gain further increases to about 5.2 dBi, making this length desirable for that reason even though the antenna is then off-resonance. Longer dipoles than that have radiation patterns that are multi-lobed, with poorer gain (unless they are much longer) even along the strongest lobe. Other enhancements to the dipole (such as including a corner reflector or an array of dipoles) can be considered when more substantial directivity is desired. Such antenna designs, although based on the half-wave dipole, generally acquire their own names.
Feeding a dipole antenna
Ideally, a half-wave dipole should be fed using a balanced transmission line matching its typical 65–70 Ω input impedance. Twin lead with a similar impedance is available but seldom used and does not match the balanced antenna terminals of most radio and television receivers. Much more common is the use of common 300 Ω twin lead in conjunction with a folded dipole. The driving point impedance of a half-wave folded dipole is 4 times that of a simple half-wave dipole, thus closely matching that 300 Ω characteristic impedance. Most FM broadcast band tuners and older analog televisions include balanced 300 Ω antenna input terminals. However twin lead has the drawback that it is electrically disturbed by any other nearby conductor (including earth); when used for transmitting, care must be taken not to place it near other conductors.
Many types of coaxial cable (or coax) have a characteristic impedance of 75 Ω, which would otherwise be a good match for a half-wave dipole. However, coax is a single-ended line whereas a center-fed dipole expects a balanced line (such as twin lead). By symmetry, one can see that the dipole's terminals have an equal but opposite voltage, whereas coax has one conductor grounded. Using coax regardless results in an unbalanced line, in which the currents along the two conductors of the transmission line are no longer equal and opposite. Since there is then a net current along the transmission line, the transmission line itself becomes an antenna, with unpredictable results (since these depend on the path of the transmission line). This will generally alter the antenna's intended radiation pattern, and change the impedance seen at the transmitter or receiver.
A balun is required to use coaxial cable with a dipole antenna. The balun transfers power between the single-ended coax and the balanced antenna, sometimes with an additional change in impedance. A balun can be implemented as a transformer which also allows for an impedance transformation. This is usually wound on a ferrite toroidal core. The toroid core material must be suitable for the frequency of use, and in a transmitting antenna it must be of sufficient size to avoid saturation. Other balun designs are mentioned below.
Current balun
A current balun uses a transformer wound on a toroid or rod of magnetic material such as ferrite. All of the current seen at the input goes into one terminal of the balanced antenna. It forms a balun by choking common-mode current. The material isn't critical for 1:1 because there is no transformer action applied to the desired differential current.
A related design involves two transformers and includes a 1:4 impedance transformation.
Coax balun
A coax balun is a cost-effective method of eliminating feeder radiation but is limited to a narrow set of operating frequencies.
One easy way to make a balun is to use a length of coaxial cable equal to half a wavelength. The inner core of the cable is linked at each end to one of the balanced connections for a feeder or dipole. One of these terminals should be connected to the inner core of the coaxial feeder. All three braids should be connected together. This then forms a 4:1 balun, which works correctly at only a narrow band of frequencies.
Sleeve balun
At VHF frequencies, a sleeve balun can also be built to remove feeder radiation.
Another narrow-band design is to use a quarter-wavelength of metal pipe. The coaxial cable is placed inside the pipe; at one end the braid is wired to the pipe while at the other end no connection is made to the pipe. The balanced end of this balun is at the end where no connection is made to the pipe. The quarter-wave conductor acts as a transformer, converting the zero impedance at the short to the braid into an infinite impedance at the open end. This infinite impedance at the open end of the pipe prevents current flowing into the outer coax formed by the outside of the inner coax shield and the pipe, forcing the current to remain in the inside coax. This balun design is impractical for low frequencies because of the long length of pipe that will be needed.
Common applications
"Rabbit ears" TV antenna
One of the most common applications of the dipole antenna is the rabbit ears or bunny ears television antenna, found atop broadcast television receivers. It is used to receive the VHF terrestrial television bands, consisting in the US of 54–88 MHz (band I) and 174–216 MHz (band III), with wavelengths of 5.5–1.4 m. Since this frequency range is much wider than a single fixed dipole antenna can cover, it is made with several degrees of adjustment. It is constructed of two telescoping rods that can each be extended out to about 1 m length (one-quarter wavelength at 75 MHz). With control over the segments' length, angle with respect to vertical, and compass angle, one has much more flexibility in optimizing reception than available with a rooftop antenna even if equipped with an antenna rotor.
FM-broadcast-receiving antennas
In contrast to the wide television frequency bands, the FM broadcast band (88-108 MHz) is narrow enough that a dipole antenna can cover it. For fixed use in homes, hi-fi tuners are typically supplied with simple folded dipoles resonant near the center of that band. The feedpoint impedance of a folded dipole, which is quadruple the impedance of a simple dipole, is a good match for 300 Ω twin lead, so that is usually used for the transmission line to the tuner. A common construction is to make the arms of the folded dipole out of twin lead also, shorted at their ends. This flexible antenna can be conveniently taped or nailed to walls, following the contours of moldings.
Shortwave antenna
Horizontal wire dipole antennas are popular for use on the HF shortwave bands, both for transmitting and shortwave listening. They are usually constructed of two lengths of wire joined by a strain insulator in the center, which is the feedpoint. The ends can be attached to existing buildings, structures, or trees, taking advantage of their heights. If used for transmitting, it is essential that the ends of the antenna be attached to supports through strain insulators with a sufficiently high flashover voltage, since the antenna's high-voltage antinodes occur there. Being balanced antennas, they are best fed with a balun between the (coax) transmission line and the feedpoint.
These are simple to put up for temporary or field use. But they are also widely used by radio amateurs and shortwave listeners in fixed locations due to their simple (and inexpensive) construction, while still realizing a resonant antenna at frequencies where resonant antenna elements need to be of considerable size. They are an attractive solution for these frequencies when significant directionality is not desired, and the cost of several such home-built resonant antennas for different frequency bands may still be much less than that of a single commercially produced antenna.
Dipole towers
Antennas for MF and LF radio stations are usually constructed as mast radiators, in which the vertical mast itself forms the antenna. Although mast radiators are most commonly monopoles, some are dipoles. The metal structure of the mast is divided at its midpoint into two insulated sections to make a vertical dipole, which is driven at the midpoint.
Dipole arrays
Many types of array antennas are constructed using multiple dipoles, usually half-wave dipoles. The purpose of using multiple dipoles is to increase the directional gain of the antenna over the gain of a single dipole; the radiation of the separate dipoles interferes to enhance power radiated in desired directions. In arrays with multiple dipole driven elements, the feedline is split using an electrical network in order to provide power to the elements, with careful attention paid to the relative phase delays due to transmission between the common point and each element.
In order to increase antenna gain in horizontal directions (at the expense of radiation towards the sky or towards the ground) one can stack antennas in the vertical direction in a broadside array where the antennas are fed in phase. Doing so with horizontal dipole antennas retains those dipoles' directionality and null in the direction of their elements. However, if each dipole is vertically oriented, in a so-called collinear antenna array (see graphic), that null direction becomes vertical and the array acquires an omnidirectional radiation pattern (in the horizontal plane) as is typically desired. Vertical collinear arrays are used in the VHF and UHF frequency bands, at which wavelengths the size of the elements is small enough to practically stack several on a mast. They are a higher-gain alternative to quarter-wave ground plane antennas used in fixed base stations for mobile two-way radios, such as police, fire, and taxi dispatchers.
On the other hand, for a rotating antenna (or one used only towards a particular direction) one may desire increased gain and directivity in a particular horizontal direction. If the broadside array discussed above (whether collinear or not) is turned horizontal, then one obtains a greater gain in the horizontal direction perpendicular to the antennas, at the expense of most other directions. Unfortunately, that also means that the direction opposite the desired direction also has a high gain, whereas high gain is usually desired in one single direction. The power that is wasted in the reverse direction, however, can be redirected, for instance by using a large planar reflector, as is accomplished in the reflective array antenna, increasing the gain in the desired direction by another 3 dB.
An alternative realization of a uni-directional antenna is the end-fire array. In this case the dipoles are again side by side (but not collinear), but fed in progressing phases, arranged so that their waves add coherently in one direction but cancel in the opposite direction. So now, rather than being perpendicular to the array direction as in a broadside array, the directivity is in the array direction (i.e. the direction of the line connecting their feedpoints) but with one of the opposite directions suppressed.
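The broadside/end-fire distinction described above can be illustrated with the standard array factor of a uniform linear array. The following sketch uses illustrative parameter values (element count, spacing, and phase are assumptions, not from the text): in-phase feeding peaks perpendicular to the array, while a progressive phase of −kd steers the peak along the array axis:

```python
import numpy as np

def array_factor(theta, n=4, d=0.5, beta=0.0):
    """|Array factor| of n isotropic elements spaced d wavelengths apart
    along z, fed with progressive phase beta (radians per element)."""
    k_d = 2 * np.pi * d                  # electrical spacing in radians
    psi = k_d * np.cos(theta) + beta     # phase difference between elements
    n_idx = np.arange(n)
    return np.abs(np.sum(np.exp(1j * np.outer(psi, n_idx)), axis=1))

theta = np.linspace(0.0, np.pi, 181)
broadside = array_factor(theta, beta=0.0)               # in-phase feed
endfire = array_factor(theta, beta=-2 * np.pi * 0.5)    # beta = -k*d

print(theta[np.argmax(broadside)] * 180 / np.pi)  # ~90 deg: perpendicular
print(theta[np.argmax(endfire)] * 180 / np.pi)    # ~0 deg: along the array
```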
Yagi antennas
The above-described antennas with multiple driven elements require a complex feed system of signal splitting, phasing, distribution to the elements, and impedance matching. A different sort of end-fire array which is much more often used is based on the use of so-called parasitic elements. In the popular high-gain Yagi antenna, only one of the dipoles is actually connected electrically, but the others receive and reradiate power supplied by the driven element. This time, the phasing is accomplished by careful choice of the lengths as well as positions of the parasitic elements, in order to concentrate gain in one direction and largely cancel radiation in the opposite direction (as well as all other directions). Although the realized gain is less than a driven array with the same number of elements, the simplicity of the electrical connections makes the Yagi more practical for consumer applications.
Dipole as a reference standard
Antenna gain is frequently measured as decibels relative to a half-wave dipole. One reason is that practical antenna measurements require a reference strength against which to compare the field strength of an antenna under test at a particular distance. While there is no such thing as an isotropic radiator, the half-wave dipole is well understood and well behaved, and can be constructed to be nearly 100% efficient. It is also a fairer comparison, since the gain obtained by the dipole itself is essentially "free", given that almost no antenna design has a smaller directive gain.
For a gain measured relative to a dipole, one says the antenna has a gain of so many dBd (see Decibel). More often, gains are expressed relative to an isotropic radiator, making the gain figure appear higher. In consideration of the known gain of a half-wave dipole, 0 dBd is defined as 2.15 dBi; all gains in "dBi" are shifted 2.15 higher than gains in "dBd".
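A trivial sketch of the unit conversion just described (function names are illustrative):

```python
def dbd_to_dbi(gain_dbd: float) -> float:
    """Convert antenna gain referenced to a half-wave dipole (dBd)
    to gain referenced to an isotropic radiator (dBi)."""
    return gain_dbd + 2.15

def dbi_to_dbd(gain_dbi: float) -> float:
    """Convert dBi back to dBd."""
    return gain_dbi - 2.15

print(dbd_to_dbi(0.0))  # 2.15 dBi: the gain of the half-wave dipole itself
```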
Hertzian dipole
The Hertzian dipole or elementary doublet refers to a theoretical construction, rather than a physical antenna design: It is an idealized tiny segment of conductor carrying an RF current with constant amplitude and direction along its entire (short) length; a real antenna can be modeled as the combination of many Hertzian dipoles laid end-to-end.
The Hertzian dipole may be defined as a finite oscillating current (in a specified direction) of $I$ over a tiny or infinitesimal length $\delta\ell$ at a specified position. The solution of the fields from a Hertzian dipole can be used as the basis for analytical or numerical calculation of the radiation from more complex antenna geometries (such as practical dipoles) by forming the superposition of fields from a large number of Hertzian dipoles comprising the current pattern of the actual antenna. As a function of position, taking the elementary current elements as the local current multiplied by infinitesimal lengths, the resulting field pattern then reduces to an integral over the path of the antenna conductor (modeled as a thin wire).
For the following derivation, we shall take the current to be $I$ in the $\hat{\mathbf{z}}$ direction, centered at the origin, with the sinusoidal time dependence $e^{i\omega t}$ for all quantities being understood. The simplest approach is to use the calculation of the vector potential $\mathbf{A}(\mathbf{r})$ using the formula for the retarded potential. Although the value of $\mathbf{A}$ is not unique, we shall constrain it by adopting the Lorenz gauge, and assuming sinusoidal current at radian frequency $\omega$ the retardation of the field is converted just into a phase factor $e^{-ikr}$, where $k = \omega/c = 2\pi/\lambda$ is the wave number in free space and $r$ is the linear distance between the point being considered and the origin (where we assumed the current source to be). This results in a vector potential at position $\mathbf{r}$ due to that current element only, which we find is purely in the $\hat{\mathbf{z}}$ direction (the direction of the current):

$$\mathbf{A}(\mathbf{r}) = \hat{\mathbf{z}}\,\frac{\mu_0\, I\,\delta\ell}{4\pi r}\, e^{-ikr}$$

where $\mu_0$ is the permeability of free space. Then using

$$\mathbf{B} = \mu_0\mathbf{H} = \nabla\times\mathbf{A}$$

we can solve for the magnetic field $\mathbf{H}$, and from that (dependent on us having chosen the Lorenz gauge) the electric field using

$$\mathbf{E} = \frac{1}{i\omega\varepsilon_0}\,\nabla\times\mathbf{H}.$$

In spherical coordinates we find that the magnetic field has only a component in the $\hat{\boldsymbol{\phi}}$ direction:

$$H_\phi = i\,\frac{k\,I\,\delta\ell}{4\pi r}\left(1 + \frac{1}{ikr}\right)\sin\theta\; e^{-ikr}$$

while the electric field has components both in the $\hat{\boldsymbol{\theta}}$ and $\hat{\mathbf{r}}$ directions:

$$E_\theta = i\,\frac{Z_0\,k\,I\,\delta\ell}{4\pi r}\left(1 + \frac{1}{ikr} - \frac{1}{(kr)^2}\right)\sin\theta\; e^{-ikr}$$

$$E_r = \frac{Z_0\,I\,\delta\ell}{2\pi r^2}\left(1 + \frac{1}{ikr}\right)\cos\theta\; e^{-ikr}$$

where $Z_0 = \sqrt{\mu_0/\varepsilon_0}$ is the impedance of free space.
This solution includes near field terms that are very strong near the source but which are not radiated. As seen in the accompanying animation, the $\mathbf{E}$ and $\mathbf{H}$ fields very close to the source are almost 90° out of phase, thus contributing very little to the Poynting vector by which radiated flux is computed. The near field solution for an antenna element (from the integral using this formula over the length of that element) is the field that can be used to compute the mutual impedance between it and another nearby element.
For computation of the far field radiation pattern, the above equations are simplified, as only the $1/r$ terms remain significant:

$$H_\phi = i\,\frac{k\,I\,\delta\ell}{4\pi r}\,\sin\theta\; e^{-ikr}, \qquad E_\theta = Z_0\, H_\phi.$$

The far-field pattern is thus seen to consist of a transverse electromagnetic (TEM) wave, with electric and magnetic fields at right angles to each other and at right angles to the direction of propagation (the direction of $\hat{\mathbf{r}}$, as we assumed the source to be at the origin). The electric polarization, in the $\hat{\boldsymbol{\theta}}$ direction, is coplanar with the source current (in the $\hat{\mathbf{z}}$ direction), while the magnetic field is at right angles to that, in the $\hat{\boldsymbol{\phi}}$ direction. It can be seen from these equations, and also in the animation, that the fields at these distances are exactly in phase. Both fields fall according to $1/r$, with the power thus falling according to $1/r^2$ as dictated by the inverse-square law.
Radiation resistance
If one knows the far field radiation pattern due to a given antenna current, then it is possible to compute the radiation resistance directly. For the above fields due to the Hertzian dipole, we can compute the power flux according to the Poynting vector, resulting in a power (as averaged over one cycle) of:

$$\langle\mathbf{S}\rangle = \tfrac{1}{2}\,\operatorname{Re}\big(\mathbf{E}\times\mathbf{H}^*\big).$$

With increasing $r$, the $E_r$ component becomes insignificantly small compared to the $E_\theta$ component. Although not required, it is easiest to work with the asymptotic value that $\langle\mathbf{S}\rangle$ approaches at large $r$ using the simpler far-field expressions for $E_\theta$ and $H_\phi$. Consider a large sphere surrounding the source with a radius $r$. We find the power per unit area crossing the surface of that sphere in the $\hat{\mathbf{r}}$ direction to be:

$$\langle S_r\rangle = \frac{Z_0}{2}\left(\frac{k\,I\,\delta\ell}{4\pi r}\right)^2\sin^2\theta.$$

Integration of this flux over the complete sphere results in:

$$P_\text{net} = \frac{\pi Z_0}{3}\left(\frac{I\,\delta\ell}{\lambda}\right)^2$$

where $\lambda = 2\pi/k$ is the free-space wavelength corresponding to the radian frequency $\omega$. By definition, the radiation resistance $R_\text{rad}$ times the average of the square of the current, $\tfrac{1}{2}I^2$, is the net power radiated due to that current, so equating the above to $\tfrac{1}{2}I^2 R_\text{rad}$ we find:

$$R_\text{rad} = \frac{2\pi}{3}\,Z_0\left(\frac{\delta\ell}{\lambda}\right)^2 \approx 789\left(\frac{\delta\ell}{\lambda}\right)^2\ \Omega.$$
This method can be used to compute the radiation resistance for any antenna whose far-field radiation pattern has been found in terms of a specific antenna current. If ohmic losses in the conductors are neglected, the radiation resistance (considered relative to the feedpoint) is identical to the resistive (real) component of the feedpoint impedance. Unfortunately, this exercise tells us nothing about the reactive (imaginary) component of feedpoint impedance, whose calculation is considered below.
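A small numeric check of the radiation resistance formula above; the function name and the example element length are illustrative assumptions:

```python
import math

Z0 = 376.730  # impedance of free space, ohms

def hertzian_rad_resistance(length_over_lambda: float) -> float:
    """Radiation resistance (ohms) of a Hertzian dipole of electrical
    length delta_l/lambda, from R = (2*pi/3) * Z0 * (delta_l/lambda)**2."""
    return (2.0 * math.pi / 3.0) * Z0 * length_over_lambda ** 2

# A current element 1/100 of a wavelength long radiates into only ~0.08 ohm,
# illustrating why electrically short antennas are hard to feed efficiently.
print(round(hertzian_rad_resistance(0.01), 4))  # ~0.0789
```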
Directive gain
Using the above expression for the radiated flux given by the Poynting vector, it is also possible to compute the directive gain of the Hertzian dipole. Dividing the total power computed above by $4\pi r^2$ we can find the flux averaged over all directions as

$$\langle S\rangle_\text{mean} = \frac{P_\text{net}}{4\pi r^2} = \frac{Z_0}{12}\left(\frac{I\,\delta\ell}{\lambda\, r}\right)^2.$$

Dividing the flux radiated in a particular direction by $\langle S\rangle_\text{mean}$, we obtain the directive gain:

$$G(\theta) = \frac{\langle S_r\rangle}{\langle S\rangle_\text{mean}} = \frac{3}{2}\,\sin^2\theta.$$

The commonly quoted antenna "gain", meaning the peak value of the gain pattern (radiation pattern), is found to be 1.5, equivalent to 1.76 dBi, lower than practically any other antenna configuration.
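The directivity figure can be checked numerically: the gain pattern $G(\theta) = \tfrac{3}{2}\sin^2\theta$ must average to unity over the sphere, and its peak value is then the quoted directivity of 1.5 (1.76 dBi). A minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

# Average of G(theta) = 1.5*sin(theta)**2 over the sphere:
# (1/4pi) * integral of G dOmega = (1/2) * integral of G(theta)*sin(theta) dtheta
g = lambda th: 1.5 * np.sin(th) ** 2
avg, _ = quad(lambda th: g(th) * np.sin(th) / 2.0, 0.0, np.pi)

print(avg)                 # -> 1.0, so the pattern is correctly normalised
print(10 * np.log10(1.5))  # -> 1.76 (peak gain in dBi)
```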
Comparison with the short dipole
The Hertzian dipole is similar to but differs from the short dipole, discussed above. In both cases the conductor is very short compared to a wavelength, so the standing wave pattern present on a half-wave dipole (for instance) is absent. However, with the Hertzian dipole we specified that the current along that conductor is constant over its short length. This makes the Hertzian dipole useful for analysis of more complex antenna configurations, where every infinitesimal section of that real antenna's conductor can be modeled as a Hertzian dipole with the current found to be flowing in that real antenna.
However a short conductor fed with a RF voltage will not have a uniform current even along that short range. Rather, a short dipole in real life has a current equal to the feedpoint current at the feedpoint but falling linearly to zero over the length of that short conductor. By placing a capacitive hat, such as a metallic ball, at the end of the conductor, it is possible for its self capacitance to absorb the current from the conductor and better approximate the constant current assumed for the Hertzian dipole. But again, the Hertzian dipole is meant only as a theoretical construct for antenna analysis.
The short dipole, with a feedpoint current of $I_0$, has an average current over each conductor of only $\tfrac{1}{2}I_0$. The above field equations for the Hertzian dipole of length $\delta\ell$ would then predict the actual fields for a short dipole using that effective current $I = \tfrac{1}{2}I_0$. This would result in a power measured in the far field of one quarter that given by the above equation for the magnitude of the Poynting vector, if we had assumed an element current of $I_0$. Consequently, it can be seen that the radiation resistance computed for the short dipole is one quarter of that computed above for the Hertzian dipole. But their radiation patterns (and gains) are otherwise identical.
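Combining the quarter factor with the Hertzian result above gives the standard short-dipole radiation resistance (with $\ell$ the total length of the short dipole):

$$R_\text{short} = \tfrac{1}{4}\,R_\text{Hertzian} = \frac{\pi}{6}\,Z_0\left(\frac{\ell}{\lambda}\right)^2 \approx 197\left(\frac{\ell}{\lambda}\right)^2\ \Omega.$$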
Detailed calculation of dipole feedpoint impedance
The impedance seen at the feedpoint of a dipole of various lengths has been plotted above, in terms of the real (resistive) component $R_\text{dipole}$ and the imaginary (reactive) component $X_\text{dipole}$ of that impedance. For the case of an antenna with perfect conductors (no ohmic loss), $R_\text{dipole}$ is identical to the radiation resistance, which can more easily be computed from the total power in the far-field radiation pattern for a given applied current, as we showed for the short dipole. The calculation of $X_\text{dipole}$ is more difficult.
Induced EMF method
Using the induced EMF method, closed-form expressions are obtained for both components of the feedpoint impedance; such results are plotted above. The solution depends on an assumption for the form of the current distribution along the antenna conductors. For wavelength-to-element-diameter ratios greater than about 60, the current distribution along each antenna element of length $L$ is very well approximated as having the form of a sine function at points $z$ along the antenna, with the current reaching zero at the elements' ends, where $z = \pm L/2$, as follows:

$$I(z) = A\,\sin\!\big(k\,(L/2 - |z|)\big)$$

where $k$ is the wavenumber given by $2\pi/\lambda$ and the amplitude $A$ is set to match a specified driving-point current at $z = 0$.
In cases where an approximately sinusoidal current distribution can be assumed, this method solves for the driving-point impedance in closed form using the cosine and sine integral functions $\operatorname{Ci}(x)$ and $\operatorname{Si}(x)$. For a dipole of total length $L$, the resistive and reactive components of the driving-point impedance can be expressed as:

$$R_\text{dipole} = \frac{Z_0}{2\pi\sin^2(kL/2)}\Big\{\gamma + \ln(kL) - \operatorname{Ci}(kL) + \tfrac{1}{2}\sin(kL)\,\big[\operatorname{Si}(2kL) - 2\operatorname{Si}(kL)\big] + \tfrac{1}{2}\cos(kL)\,\big[\gamma + \ln(kL/2) + \operatorname{Ci}(2kL) - 2\operatorname{Ci}(kL)\big]\Big\}$$

$$X_\text{dipole} = \frac{Z_0}{4\pi\sin^2(kL/2)}\Big\{2\operatorname{Si}(kL) + \cos(kL)\,\big[2\operatorname{Si}(kL) - \operatorname{Si}(2kL)\big] - \sin(kL)\,\big[2\operatorname{Ci}(kL) - \operatorname{Ci}(2kL) - \operatorname{Ci}(2ka^2/L)\big]\Big\}$$

where $a$ is the radius of the conductors, $k$ is again the wavenumber as defined above, $Z_0 \approx 376.73\ \Omega$ is the impedance of free space (very nearly the same as the impedance of air), and $\gamma \approx 0.5772$ is Euler's constant. There is an equivalent alternate form favoured by some authors that uses a different function, $\operatorname{Cin}(x) = \gamma + \ln(x) - \operatorname{Ci}(x)$.
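The closed-form expressions above are straightforward to evaluate with the sine and cosine integral functions available in SciPy (scipy.special.sici). This sketch assumes the formulas as reconstructed here; for a thin half-wave dipole it reproduces the familiar textbook value of roughly 73 + j42.5 Ω:

```python
import numpy as np
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

Z0 = 376.730          # impedance of free space, ohms
GAMMA = 0.5772156649  # Euler's constant

def dipole_impedance(L: float, a: float) -> complex:
    """Induced-EMF driving-point impedance of a thin dipole assuming a
    sinusoidal current distribution. L = total length and a = wire
    radius, both in wavelengths."""
    k = 2.0 * np.pi              # wavenumber, with lengths in wavelengths
    kL = k * L
    si_kL, ci_kL = sici(kL)
    si_2kL, ci_2kL = sici(2.0 * kL)
    _, ci_small = sici(2.0 * k * a**2 / L)   # the Ci(2*k*a^2/L) term
    s2 = np.sin(kL / 2.0) ** 2
    R = Z0 / (2.0 * np.pi * s2) * (
        GAMMA + np.log(kL) - ci_kL
        + 0.5 * np.sin(kL) * (si_2kL - 2.0 * si_kL)
        + 0.5 * np.cos(kL) * (GAMMA + np.log(kL / 2.0) + ci_2kL - 2.0 * ci_kL)
    )
    X = Z0 / (4.0 * np.pi * s2) * (
        2.0 * si_kL
        + np.cos(kL) * (2.0 * si_kL - si_2kL)
        - np.sin(kL) * (2.0 * ci_kL - ci_2kL - ci_small)
    )
    return complex(R, X)

# A thin half-wave dipole (L = lambda/2, a = lambda/10000):
print(dipole_impedance(L=0.5, a=1e-4))  # ~ (73 + 42.5j) ohms
```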
Integral methods
The induced EMF method is dependent on the assumption of a sinusoidal current distribution, delivering an accuracy better than about 10% as long as the wavelength-to-element diameter ratio is greater than about 60. However, for yet larger conductors numerical solutions are required which solve for the conductor's current distribution (rather than assuming a sinusoidal pattern). This can be based on approximating solutions for either Pocklington's integrodifferential equation or the Hallén integral equation. These approaches also have greater generality, not being limited to linear conductors.
Numerical solution of either is performed using the moment method, which requires expansion of the current into a set of $N$ basis functions; one simple (but not the best) choice, for instance, is to break up the conductor into $N$ segments with a constant current assumed along each. After setting an appropriate weighting function the cost may be minimized through the inversion of an $N \times N$ matrix. Determination of each matrix element requires at least one double integration involving the weighting functions, which may become computationally intensive. These are simplified if the weighting functions are simply delta functions, which corresponds to fitting the boundary conditions for the current along the conductor at only $N$ discrete points. Then the $N \times N$ matrix must be inverted, which is also computationally intensive as $N$ increases. In one simple example of this computation, the antenna impedance is found for different values of $N$ using Pocklington's method, and as $N$ is increased the solutions are found to approach their limiting values to within a few percent.
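A toy sketch of the moment-method procedure just described: pulse basis functions (constant current per segment), delta-function weighting (point matching at segment centres), and a delta-gap feed, applied to Pocklington's equation with the standard reduced thin-wire kernel. All parameter values are assumptions; this only illustrates the matrix assembly and inversion, and a production solver would use better basis functions and quadrature:

```python
import numpy as np

c0 = 299_792_458.0          # speed of light, m/s
eps0 = 8.8541878128e-12     # permittivity of free space, F/m

def pocklington_impedance(L: float, a: float, freq: float, N: int = 101) -> complex:
    """Feedpoint impedance of a dipole of length L and wire radius a
    (metres) at frequency freq (Hz), via point-matched Pocklington
    with N (odd) segments; the centre segment carries the delta-gap feed."""
    k = 2.0 * np.pi * freq / c0
    omega = 2.0 * np.pi * freq
    dz = L / N
    z = (np.arange(N) - (N - 1) / 2.0) * dz        # segment centres

    # Reduced thin-wire kernel: (d^2/dz^2 + k^2) exp(-jkR)/(4 pi R)
    zz = z[:, None] - z[None, :]
    R = np.sqrt(zz**2 + a**2)                      # R >= a: no singularity
    kern = np.exp(-1j * k * R) / (4.0 * np.pi * R**5) * (
        (1.0 + 1j * k * R) * (2.0 * R**2 - 3.0 * a**2) + (k * a * R) ** 2
    )
    A = kern * dz                                  # one-point quadrature per segment

    # Delta-gap source: E_inc = V0/dz on the feed segment only (V0 = 1 V);
    # Pocklington's RHS is -j*omega*eps0*E_inc at each match point.
    b = np.zeros(N, dtype=complex)
    b[N // 2] = -1j * omega * eps0 / dz
    I = np.linalg.solve(A, b)
    return 1.0 / I[N // 2]                         # Zin = V0 / I_feed

# Illustrative run: a 0.47-wavelength dipole at 300 MHz (lambda ~ 1 m).
print(pocklington_impedance(L=0.47, a=0.0005, freq=300e6, N=101))
```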
See also
AM broadcasting
Amateur radio
Balun
Coaxial antenna
Dipole field strength in free space
Driven element
Electronic symbol
FM broadcasting
Isotropic antenna
Omnidirectional antenna
Shortwave listening
T-antenna
Whip antenna
Notes
References
Antennas (radio)
Heinrich Hertz
Radio frequency antenna types
Radio technology
Articles containing video clips | Dipole antenna | Technology,Engineering | 10,759 |
14,273,036 | https://en.wikipedia.org/wiki/Lobster%20clasp | A lobster clasp, also known as a lobster hook, lobster claw, trigger clasp, or bocklebee clasp, is a fastener that is held closed by a spring. The lobster clasp is opened or closed by actuating a small lever, after which it is attached to (or removed from) a short link-chain or a ring-like structure. Lobster clasps are often used for necklaces, bracelets, and keychains.
Lobster clasps are named as such because of their "pinching" mechanism, and they are often shaped like a lobster's claw.
See also
Bolt snap
Carabiner
Jewellery components
Fasteners
References | Lobster clasp | Technology,Engineering | 133 |
5,066,836 | https://en.wikipedia.org/wiki/Coordinated%20flight | In aviation, coordinated flight of an aircraft is flight without sideslip.
When an aircraft is flying with zero sideslip a turn and bank indicator installed on the aircraft's instrument panel usually shows the ball in the center of the spirit level. The occupants perceive no lateral acceleration of the aircraft and their weight to be acting straight downward into their seats.
Particular care to maintain coordinated flight is required by the pilot when entering and leaving turns.
Advantages
Coordinated flight is usually preferred over uncoordinated flight for the following reasons:
it is more comfortable for the occupants
it minimises the drag force on the aircraft
it causes fuel to be drawn equally from tanks in both wings
it minimises the risk of entering a spin
Instrumentation
Airplanes and helicopters are usually equipped with a turn and bank indicator to provide their pilots with a continuous display of the lateral balance of their aircraft so the pilots can ensure coordinated flight.
Glider pilots attach a piece of coloured string to the outside of the canopy to sense the sideslip angle and assist in maintaining coordinated flight.
Axes of rotation
An airplane has three axes of rotation:
Pitch – in which the nose of the airplane moves up or down. This is typically controlled by the elevator at the rear of the airplane.
Yaw – in which the nose of the airplane moves left or right. This is typically controlled by the rudder at the rear of the airplane.
Roll (bank) – in which one wing of the airplane moves up and the other moves down. This is typically controlled by ailerons on the wings of the airplane.
Coordinated flight requires the pilot to use pitch, roll and yaw control simultaneously. See also flight dynamics.
Coordinating the turn
If the pilot were to use only the rudder to initiate a turn in the air, the airplane would tend to "skid" to the outside of the turn.
If the pilot were to use only the ailerons to initiate a turn in the air, the airplane would tend to "slip" toward the lower wing.
If the pilot were to fail to use the elevator to increase the angle of attack throughout the turn, the airplane would also tend to "slip" toward the lower wing.
However, if the pilot makes appropriate use of the rudder, ailerons and elevator to enter and leave the turn such that sideslip and lateral acceleration are zero the airplane will be in coordinated flight.
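For reference, the condition for a steady, level, coordinated turn can be stated quantitatively (a standard textbook relation, not taken from the sources above): the horizontal component of lift must supply the centripetal force while the vertical component balances weight, giving

$$\tan\phi = \frac{v^2}{g\,r}, \qquad n = \frac{1}{\cos\phi}$$

where $\phi$ is the bank angle, $v$ the true airspeed, $r$ the turn radius, $g$ the gravitational acceleration, and $n$ the load factor.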
See also
Adverse yaw
References
Aerodynamics | Coordinated flight | Chemistry,Engineering | 509 |
40,467,176 | https://en.wikipedia.org/wiki/Optimus%20UI | Optimus UI is a front-end touch interface developed by LG Electronics with partners, featuring a full touch user interface. It is sometimes incorrectly identified as an operating system. Optimus UI is used internally by LG for sophisticated feature phones and tablet computers, and is not available for licensing by external parties.
The latest version of Optimus UI, 4.1.2, has been released on the Optimus K II and the Optimus Neo 3. It features a more refined user interface as compared to the previous version, 4.1.1, which would include voice shutter and quick memo.
Optimus UI is used in devices based on Android.
Phones running LG Optimus UI
Android
Smartphones/Phablets
LG GT540 Optimus
LG Optimus One
LG Optimus 2X
LG Optimus 4X HD
LG Optimus 3D
LG Optimus 3D Max
LG Optimus Slider
LG Optimus LTE
LG Optimus LTE 2
LG Optimus Vu
LG Optimus Vu II
LG Optimus Black
LG Optimus Chat
LG Optimus Chic
LG Optimus Net
LG Optimus Sol
LG Optimus HUB (E510)
LG Optimus L1 II
LG Optimus L3
LG Optimus L3 II
LG Optimus L5
LG Optimus L5 II
LG Optimus L7
LG Optimus L7 II
LG Optimus L9
LG Optimus L9 II
LG Optimus L90
LG Optimus F3
LG Optimus F3Q
LG Optimus F5
LG Optimus F6
LG Optimus F7
LG Optimus G
LG Optimus G Pro
LG G2
LG G Pro 2
LG Vu 3
LG G Pro Lite
LG G Flex
LG L40 Dual
LG L65 Dual
LG L70 Dual
LG L80 Dual
LG L90 Dual
LG G3
LG G3S
LG Spectrum 2
LG G2 Mini
LG G Flex 2
Tablets
LG Optimus Pad
LG Optimus Pad LTE
LG G Pad 7.0
LG G Pad 8.3
References
Mobile operating systems
LG Electronics
Android (operating system) software | Optimus UI | Technology | 468 |
773,292 | https://en.wikipedia.org/wiki/Measurement%20problem | In quantum mechanics, the measurement problem is the problem of definite outcomes: quantum systems have superpositions but quantum measurements only give one definite result.
The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is describing what that "something" is, how a superposition of many possible values becomes a single measured value.
To express matters differently (paraphrasing Steven Weinberg), the Schrödinger equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum reality and classical reality?
Schrödinger's cat
A thought experiment called Schrödinger's cat illustrates the measurement problem. A mechanism is arranged to kill a cat if a quantum event, such as the decay of a radioactive atom, occurs.
The mechanism and the cat are enclosed in a chamber, so the fate of the cat is unknown until the chamber is opened. Prior to observation, according to quantum mechanics, the atom is in a quantum superposition, a linear combination of decayed and intact states. Also according to quantum mechanics, the atom-mechanism-cat composite system is described by superpositions of compound states. Therefore, the cat would be described as in a superposition, a linear combination of two states: an "intact atom-alive cat" and a "decayed atom-dead cat". However, when the chamber is opened the cat is either alive or it is dead: there is no superposition observed. After the measurement the cat is definitively alive or dead.
The cat scenario illustrates the measurement problem: how can an indefinite superposition yield a single definite outcome? It also illustrates other issues in quantum measurement, including when does a measurement occur? Was it when the cat was observed? How is a measurement apparatus defined? The mechanism for detecting radioactive decay? The cat? The chamber? What is the role of the observer?
Interpretations
The views often grouped together as the Copenhagen interpretation are the oldest and, collectively, probably still the most widely held attitude about quantum mechanics. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.
Generally, views in the Copenhagen tradition posit something in the act of observation which results in the collapse of the wave function. This concept, though often attributed to Niels Bohr, was due to Werner Heisenberg, whose later writings obscured many disagreements he and Bohr had during their collaboration and that the two never resolved. In these schools of thought, wave functions may be regarded as statistical information about a quantum system, and wave function collapse is the updating of that information in response to new data. Exactly how to understand this process remains a topic of dispute.
Bohr discussed his views in a 1947 letter to Pauli. Bohr points out that the measurement processes such as cloud chambers or photographic plates involve enormous amplification requiring energies far in excess of the quantum effects being studied and he notes that these processes are irreversible. He considered a consistent account of this issue to be an unsolved problem.
Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, a work later extended by Bryce DeWitt. However, proponents of the Everettian program have not yet reached a consensus regarding the correct way to justify the use of the Born rule to calculate probabilities.
The de Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to the de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.
A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and obtains nonlinear terms. These nonlinear modifications are of stochastic nature and lead to behaviour that for microscopic quantum objects, e.g. electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction as in the models of Diósi and Penrose. The main difference of objective-collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.
The Ghirardi–Rimini–Weber (GRW) theory proposes that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years. Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus. Because the GRW theory makes different predictions from orthodox quantum mechanics in some conditions, it is not an interpretation of quantum mechanics in a strict sense.
Role of decoherence
Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem. The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where the classical intuition is applicable. Quantum decoherence becomes an important part of some modern updates of the Copenhagen interpretation based on consistent histories. Quantum decoherence does not describe the actual collapse of the wave function, but it explains the conversion of the quantum probabilities (that exhibit interference effects) to the ordinary classical probabilities. See, for example, Zurek, Zeh and Schlosshauer.
The present situation is slowly clarifying, as described in a 2006 article by Schlosshauer.
See also
For a more technical treatment of the mathematics involved in the topic, see Measurement in quantum mechanics.
Absolute time and space
Constructor theory
Einstein's thought experiments
EPR paradox
Gleason's theorem
Observer effect (physics)
Observer (quantum physics)
Philosophy of physics
Quantum cognition
Quantum pseudo-telepathy
Quantum Zeno effect
Wigner's friend
References and notes
Further reading
R. Buniy, S. Hsu and A. Zee, "On the origin of probability in quantum mechanics" (2006)
| Measurement problem | Physics | 1,696
76,589,675 | https://en.wikipedia.org/wiki/Materialism%20controversy | The materialism controversy (German: ) was a debate in the mid-19th century regarding the implications for current worldviews of the natural sciences. In the 1840s, a new type of materialism was developed, influenced by the methodological advancements in biology and the decline of idealistic philosophy. This form of materialism aimed to explain humans in scientific terms. The controversy revolved around whether the findings of natural sciences were compatible with the concepts of an immaterial soul, a personal God and free will. Additionally, the debate focused on the epistemological requirements of a materialist/mechanist worldview.
In his "Physiologische Briefe" from 1846, the zoologist Carl Vogt explained that "thoughts have roughly the same relationship to the brain as bile has to the liver or urine to the kidneys." In 1854, the physiologist Rudolf Wagner criticised Vogt's polemical commitment to materialism in a speech to the Göttingen Naturalists' Assembly. Wagner argued that Christian faith and natural history were two largely independent spheres. The natural sciences could therefore contribute nothing to the questions of the existence of God, the immaterial soul or free will.
Wagner's attacks provoked equally sharp reactions from Vogt. The materialist point of view was also defended in the following years by the physiologist Jakob Moleschott and the doctor Ludwig Büchner, a brother of the well-known writer Georg Büchner. The materialists presented themselves as champions against philosophical, religious, and political reactionism. They set very different emphases but could count on broad support among the bourgeoisie. The promise of a scientific worldview became a defining element of the cultural conflicts in the late 19th and early 20th centuries.
Development of natural scientific materialism
Emancipation of biology
The rise of popular materialism was encouraged by a critique of romantic-idealistic natural philosophy, which became widespread after 1830 and had an equal influence on natural science, philosophy, and politics.
From the perspective of the history of science, the cell theory founded by Matthias Jacob Schleiden proved to be particularly influential. In his 1838 publication on phytogenesis, Schleiden declared the cell as the fundamental unit of all plants and identified the cell nucleus, which was discovered in 1831, as an essential factor in plant growth. The cellular theory of plant organism structure brought about a reorientation in botany. Before this, botany was primarily focused on macroscopic descriptions of forms. Schleiden's theory of plant structure was combined with a methodological critique of idealistic natural philosophy. The cell theory is based on empirically verifiable observations. This is because "one only knows as many facts about the objects of the physical natural sciences as they have observed themselves". The speculations of natural philosophers, on the other hand, were not based on strict observation. Therefore, all "forging of systems and theories had to be thrown" aside.
Schleiden's programme for a methodically renewed botany was extended to other biological disciplines in subsequent years. In 1839, Theodor Schwann published his Mikroskopische Untersuchungen über die Uebereinstimmung in der Struktur und dem Wachsthum der Thiere und Pflanzen. Schwann explained that the cell theory revealed the general principle of life. All living organisms are composed entirely of cells, and the formation of organs can be explained by the growth and reproduction of cells. Rudolf Virchow proclaimed in this context that "Life is essentially cellular activity". The cell theory thus opened up the prospect of a scientific theory of life, on which materialists were able to build a few years later.
Turning away from idealistic philosophy
Parallel to the methodological reorientation of the biological disciplines, a general criticism of the conservative legacy of German idealism developed in the intellectual climate of the Vormärz. In the natural sciences, criticism of natural philosophical methodology remained moderate, with many biologists remaining staunch anti-materialists. On the other hand, it was only a few years after Hegel died in 1831, when philosophical movements emerged that radically broke with German idealism in terms of ideology.
The critique of religion, as presented by Ludwig Feuerbach in The Essence of Christianity, was socially explosive and of particular importance. Having begun his studies under Hegel in Berlin in 1824, Feuerbach attended his teacher's lectures for over two years and wrote traditional idealist texts into the 1830s. However, Feuerbach and many other young students of Hegel began to have doubts. The Young Hegelians were critical not only of the political conservatism of German idealism but also of its system-philosophy, detached as it was from empirical observation. In 1839, Feuerbach finally criticised his teacher's idealistic system in its principles. Although coherent and conclusive, it had distanced itself from sensory nature in an inadmissible way. Philosophy should be grounded in the sensual in order to arrive at a recognition of nature and reality. "Vanity is therefore all speculation that seeks to go beyond nature and man." Feuerbach and the new biological movements shared the idea of a view of nature emancipated from speculation. However, Feuerbach's goal was an anthropological theory of man, not one of the natural sciences.
Feuerbach's explosive anthropology revealed itself most in his philosophy of religion. Idealist philosophy had erred in attempting to prove the truth of the theological doctrines through abstract arguments. In reality, he argued, religion was not a metaphysical truth, but an expression of human needs. The existence of God could not be proven by theologians and philosophers, as God was a human invention. Feuerbach's argument was not directed against religions in general. He acknowledged that there were certainly good reasons for religious belief. But he believed that these reasons were of a psychological nature, as religions satisfied real human needs. In contrast, philosophical-theological proofs of the existence of God were seen as speculative fantasies. Feuerbach's critique of religion was received as a radical attack on the cultural establishment, and by the mid-1840s, he had become the centre of philosophical renewal movements.
Carl Vogt and the political opposition
The materialism controversy was sparked by the materialist theses published by the physiologist Carl Vogt from 1847 onwards. Vogt's turn towards materialism was influenced by the scientific and cultural renewal movements as well as by his political development. Vogt was born in Giessen in 1817 and grew up in a family that combined scientific and social-revolutionary tendencies. Philipp Friedrich Wilhelm Vogt, Carl's father, was a medical professor in Giessen until he accepted a professorship in Bern in 1834 due to the threat of political persecution. Political entanglement was also a tradition in his mother's family: Louise Follen's three brothers were all forced into emigration due to their nationalist and democratic activities.
In 1817 Adolf Follen drafted an outline for a future imperial constitution. Two years later, he was arrested for "German activities". His subsequent exile in Switzerland spared him from a 10-year prison sentence. Karl Follen defended tyrannicide in a pamphlet and was therefore considered the intellectual author of the assassination attempt on the writer August von Kotzebue. He managed to escape to the United States, where he established himself as a professor of German at Harvard University from 1825. In 1833 Paul Follen, the youngest of the Follen brothers, co-founded the Gießener Auswanderungsgesellschaft with Friedrich Münch. Although the society's goal of establishing a German republic in the United States was unsuccessful, Paul Follen eventually settled in Missouri and became a farmer.
Carl Vogt started studying medicine at Giessen in 1833, but switched to chemistry under the guidance of Justus Liebig. Liebig's experimental methods were in direct contrast to the idealistic philosophy of nature. As a co-founder of organic chemistry, Liebig rejected the separation between living processes and dead matter, providing Vogt with an intellectual foundation for the materialism that he later developed. However, in 1835, Vogt was unable to continue his studies in Giessen due to political circumstances. He had assisted a politically persecuted student to escape, which made him a target of the police. As a result, Vogt emigrated to Switzerland and completed his studies at the Faculty of Medicine in 1839.
During the early 1840s Vogt became involved with the political opposition and new scientific movements. However, he had not yet developed his ideological materialism at this point. It was during his three-year stay in Paris that Vogt's political and ideological radicalisation occurred. His acquaintance with anarchists Mikhail Bakunin and Pierre-Joseph Proudhon had a lasting influence on Vogt's political thinking. Starting in 1845, he began publishing his Physiologische Briefe, which presented physiology clearly and understandably based on Liebig's Chemische Briefe. The initial letters did not reference Vogt's materialism. Only in the letter on nerve power and mental activity, published in 1846, did Vogt state "that the seat of consciousness, will and thought must ultimately be sought solely in the brain".
Initially, political practice took precedence over materialist theory. Vogt had just been appointed professor of zoology in Giessen, through the influence of Liebig and Alexander von Humboldt, when the German Revolution began in March 1848 and democratic forces rose up against the so-called reaction in various parts of Germany. When the March Revolution reached the small university town of Giessen, Vogt was appointed commander of the militia and eventually represented the 6th electoral district of Hesse-Darmstadt in the Frankfurt Parliament from 1848 to 1849. After the Prussian King Frederick William IV rejected the imperial dignity offered to him and political defeats led to the dissolution of the National Assembly, Vogt moved to Stuttgart with the remaining 158 deputies to form the so-called rump parliament, which was forcibly dissolved after only a few weeks in early June 1849.
Appointed as one of the "five imperial regents" by the remaining parliament, Vogt found himself at the centre of the political opposition. On 18 June of that year, Württemberg troops occupied the conference venue. Vogt emigrated to Switzerland and took refuge in his parents' house. Having failed in his political ambitions and deprived of his academic career, he once again focused on biological studies, which he now interpreted in a radically ideological way.
Progression of the debate
Materialism controversy until 1854
In 1850, Vogt travelled to Nice to pursue zoological studies, lacking clear academic prospects. The following year, he published a book on animal states that combined zoology with a critical evaluation of the German state of affairs. The book contained a political plea in favour of anarchism, arguing that "every form of government, every law [is] a sign of the lack of completion of our state of nature". Vogt's biologistic argument in favour of anarchism was based on the idea that humans are natural and completely material organisms, in continuity with animal states. According to Vogt, biology implied both materialism and the subversion of the prevailing order. In his book, he referred unequivocally to the German situation.
Vogt succeeded in generating interest among the German public with his popular and polemical attacks. In 1852, he published Bilder aus dem Thierleben, which not only provided a detailed description of materialism but also strongly criticised German university scholars. It is argued that every biologist who thinks clearly must acknowledge the truth of materialism, as the dependence of soul functions on brain functions is evident. This dependence is most clearly shown in animal experiments, so "we can cut off the mental functions of the pigeon piece by piece by removing the brain piece by piece". But if the soul's functions depended on the brain in this way, the soul could not survive the death of the body. And if the brain functions are determined by the laws of nature, then the same must also apply to the soul.
According to Vogt, those who disagreed with these statements had not understood the necessary consequences of physiological research. This criticism was directed towards Rudolf Wagner, an anatomist and physiologist from Göttingen who, in 1851, had criticised Vogt in the Augsburger Allgemeine Zeitung for replacing God with a "blind, unconscious necessity". Wagner had also suggested that the soul of a child is composed of equal parts of the mother's and father's souls. Vogt found this idea a welcome target for attack. The concept of a composite child's soul not only contradicts the theological belief in the indivisibility of the soul, but is also physiologically nonsensical. Physical characteristics, such as facial features, are naturally inherited from parents by their children, and the same applies to the brain. This is why the inheritance of character traits can be easily explained in materialistic terms.
Göttingen Naturalists' Meeting
During the summer of 1854, the 31st Naturalists' Meeting was held in Göttingen. The meeting was dominated by the debate over the existence of a God-created soul. Wagner used this platform to deliver a lecture on human creation and the substance of the soul. In his lecture, he accused the materialists of undermining the moral foundations of social order by denying free will.
In Wagner's view, Vogt's materialism contradicted the moral responsibility of the researcher, as it reduced people to blind and irresponsible machines. In the same year, Wagner published a second paper in which he supplemented his moral criticisms with a general argument on the relationship between knowledge and faith. According to Wagner, these are two largely independent areas, meaning that no scientific knowledge can prove or disprove religious faith.
Physiologists describe the internal structure and function of physical organs. Materialists interpret these descriptions by identifying physical and mental functions. Dualists assume that bodily functions act on an immaterial soul. The natural sciences cannot decide the question of the soul, because neither interpretation can be derived from a physiological description. "There is not a single point in the biblical doctrine of the soul ... that would contradict any doctrine of modern physiology and natural science."
Köhlerglaube und Wissenschaft
Wagner's pamphlets brought the materialism debate to the centre of public interest. In response, Vogt wrote the polemic Köhlerglaube und Wissenschaft ("Köhler Faith and Science") against Hofrath Rudolf Wagner in Göttingen. The first half of the text consists of ad hominem attacks against Wagner: it was alleged that he was not a serious and productive scientist but merely adorned himself, as the editor of countless works, with the research of others, and that he had also tried to suppress his materialistic critics with the help of state power. Wagner's assertion that the materialist denial of free will was socially irresponsible in view of the political events of 1848 (the March Revolution) particularly provoked Vogt's anger.
In the second part of the work, Vogt systematically argued against Wagner's thesis of the compatibility of "naive Köhler faith" and scientific knowledge. Vogt stated that anyone who places the soul in a realm beyond empirical verifiability cannot be directly refuted by physiology. However, this assumption is ultimately useless and even incomprehensible. The relationship between soul functions and brain functions supports the idea of an identity of body and soul, rather than confirming the axiom of the immaterial soul. Even Wagner acknowledges that the functions of all organs apart from the brain are ordinary biological processes: he does not propose the existence of a "muscle soul" that initiates muscle contraction, nor does he argue that there is a kidney soul responsible for the excretion of metabolic products in addition to the biological processes in the kidneys. "It is only in the case of the brain that this is not recognised; it is only in the case of the brain that a special, illogical conclusion that is not valid for the other organs is accepted".
Food, strength and substance
By 1855 the commitment to materialism had become an influential movement, despite the resistance Vogt's polemical theses faced in academic and political circles. Vogt was supported by two younger scientists, Jakob Moleschott and Ludwig Büchner, who likewise published their materialist ideas in popular science works. The three authors came to be portrayed as the champions of a seemingly unified materialism. The debate surrounding materialism catalysed intensifying popularisation efforts and ideological debates, themselves not without controversy, about the relationship between natural science and society. This led to discussions about the Darwinian theory of evolution from the late 1850s onwards.
Jakob Moleschott was born in 's-Hertogenbosch, the Netherlands, in 1822. He came into contact with Hegel's philosophy at an early age but eventually studied medicine in Heidelberg. Becoming strongly influenced by Feuerbach's philosophy, he focused on questions of metabolism and dietetics. Moleschott believed that food was the fundamental building block of both physical and mental functions, in line with his materialistic convictions. In his book 'Die Lehre der Nahrungsmittel: Für das Volk', Moleschott aimed to popularise his studies and presented detailed dietary plans for the impoverished sections of the population. The materialistic approach should not only deny the existence of an immaterial soul and God but also positively lead people to a better life.
In 1850 Moleschott sent a copy of his work to Feuerbach, who published an influential review titled Die Naturwissenschaft und die Revolution in the same year. In the 1840s, Feuerbach had defined his philosophy beyond idealism and materialism; now he explicitly supported the materialists. The philosophers continued to argue fruitlessly about the relationship between body and soul, while the natural sciences had already found the answer.
Büchner's relationship with the public was even more influential than Moleschott's alliance with Feuerbach. Born in Darmstadt in 1824, Büchner had already met Vogt as a student. In 1848, he became a member of the citizens' militia led by Vogt. After a few unhappy years as an assistant at the medical faculty in Tübingen, Büchner decided to publish a concise summary of the materialist worldview. Kraft und Stoff was a bestseller, with 12 editions published in the first 17 years, and translated into 16 languages. Unlike Vogt and Moleschott, Büchner presented materialism in a summary of findings that could be understood without prior philosophical or scientific knowledge, rather than in the context of his research. The starting point was the unity of force and substance, which Moleschott had already emphasised. Inherent forces are necessary for the existence of substance, just as substance is necessary as a carrier for forces. This unity directly implies the impossibility of immaterial souls, as they would have to exist without a material carrier.
Reactions in the 19th century
Philosophy of neo-Kantianism
Materialism was supported by natural scientists such as Vogt, Moleschott, and Büchner, who presented their theories as the consequences of empirical research. Following the collapse of German idealism, university philosophy appeared discredited as baseless speculation. Even the philosopher Feuerbach entrusted the natural sciences with resolving the philosophical question of the relationship between soul and body.
An influential philosophical critique of materialism did not develop until the 1860s with the emergence of Neo-Kantianism. In 1865, Otto Liebmann strongly criticised philosophical approaches from German idealism to Schopenhauer in his work Kant und die Epigonen, concluding each chapter with the statement "So we must go back to Kant!". Friedrich Albert Lange published his Geschichte des Materialismus the following year, in agreement with this position. Lange accused the materialists of "philosophical dilettantism", which, according to Kantian philosophy, ignored essential insights.
The Critique of Pure Reason's central theme was the question of the conditions of all possible knowledge, including scientific knowledge. Kant argued that human cognition does not depict the world as it really is. All knowledge is already characterized by categories such as "cause and effect" or "unity and multiplicity". These categories are not properties of the things themselves but are brought to the things by humans. Similarly, space and time lack absolute reality and are instead forms of human perception. As all knowledge is already characterised by categories and forms of perception, it is impossible for humans to recognise things in themselves. Therefore, it is not possible to scientifically prove or refute the existence of an immaterial soul, a personal God, or free will.
Lange argues that materialists made a fundamental error by disregarding Kant. Materialism posited that only matter exists, but failed to acknowledge that even scientific descriptions of matter do not describe absolute reality. Such descriptions already assume the categories and forms of perception, and therefore cannot be considered descriptions of things in themselves. Lange's argument is supported by the natural scientist Hermann von Helmholtz, who presented his work on sensory physiology in the 1850s as an empirical confirmation of Kant's work. In his 1855 lecture Ueber das Sehen des Menschen (On Human Vision), Helmholtz first described the physiological foundations of visual perception and then explained that vision is not a true-to-life representation of the outside world. Following Kant's philosophy, it is argued that human interpretation characterises every perception of the outside world, making it impossible to access things in themselves.
Ignoramus et ignorabimus
The scientific materialists did not engage with the arguments of the Neo-Kantians, as they saw the reference to Kant as just another speculative attack on the results of the natural sciences. However, the criticism of physiologist Emil Heinrich Du Bois-Reymond in his 1872 lecture Ueber die Grenzen des Naturerkennens, which declared consciousness to be a fundamental limit of the natural sciences, appeared more concerning. His phrase "Ignoramus et ignorabimus" (Latin for "We do not know and we will never know") sparked a long-lasting controversy regarding the concept of a scientific worldview. This controversy, known as the Ignorabimus controversy, was debated with similar fervour to the Vogt and Wagner debate 20 years prior, and even more so in the political arena. However, this time the materialists were on the defensive.
Du Bois-Reymond argued that the materialists' main problem was their inadequate argument in favour of the unity of brain and soul. Vogt, Moleschott, and Büchner had only emphasised the dependence of soul functions on brain functions. Tests on animals had already demonstrated that damage to the brain leads to an impairment of mental functions. The concept of an immaterial soul is rendered implausible by this dependency, making materialism the only acceptable conclusion. However, this consequence simply ignores the question of how the brain generates consciousness in the first place, as Büchner himself had conceded.
Du Bois-Reymond argued that proving dependency relationships was not enough to support materialism. To reduce consciousness to the brain, one must also explain it through brain functions. But the materialists were unable to provide such an explanation: "What conceivable connection is there between certain movements of certain atoms in my brain on the one hand, and on the other hand the original, undefinable, undeniable facts 'I feel pain, feel desire; I taste sweetness, smell the scent of roses, hear the sound of an organ, see red'." Du Bois-Reymond argued that there is no conceivable connection between the objectively described facts of the physical world and the subjectively determined facts of conscious experience. Therefore, consciousness represented a fundamental barrier to perceiving nature.
Du Bois-Reymond's Ignorabimus speech highlighted a fundamental weakness of scientific materialism. Although Vogt, Moleschott, and Büchner claimed that consciousness was material, they also acknowledged that brain functions could not explain it. This problem contributed to the development of a scientific worldview that shifted from materialism to monism in the late 19th century. Ernst Haeckel, the most famous proponent of a "monistic worldview", shared the materialists' rejection of dualism, idealism, and the concept of an immortal soul.
However, Haeckel's monism differs from materialism in that it does not recognise the primacy of matter. According to it, body and spirit are inseparable and equally fundamental aspects of a single substance. This monism appears to circumvent du Bois-Reymond's problem: if matter and spirit are equally fundamental aspects of one substance, then spirit no longer needs to be explained by matter.
Büchner also viewed monism as the appropriate response to philosophical criticism of materialism. In a letter to Haeckel dated 1875, he wrote:
Political and ideological impact
Although the materialists gained popularity among the general public, they were not successful politically. Vogt, Moleschott, and Büchner lost their careers at German universities due to their advocacy of materialism. The revolutionary nature of Vogt's materialism could not succeed during the reactionary era after 1848. During the political movements of the latter half of the 19th century, scientific materialism failed to exert significant influence, partly due to differences with Karl Marx and Friedrich Engels. Marx labelled Vogt as a "small-university beer bouncer and misguided Barrot of the Reich", and the conflicts escalated into personal denunciations. For instance, Marx's circle also accused Vogt of having worked as a French spy.
The altered political situation is also apparent in the work of Ernst Haeckel, who embraced the concept of a scientific worldview from the materialists but gave it a fresh political orientation. Haeckel, who was 17 years younger than Vogt, established himself as a proponent of Darwinism in Germany during the 1860s. In his polemical rejection of "church-wisdom and ... after-philosophy", Haeckel closely resembled the scientific materialists. Vogt viewed physiology as the starting point for a scientific worldview, while Haeckel made a similar claim in referring to Charles Darwin:
However, for Haeckel, the concept of "progress" was primarily anti-clerical, opposing the church rather than the state. Bismarck's Kulturkampf, which began in 1871, provided Haeckel with an opportunity to connect anti-clerical monism with Prussian politics. As the First World War approached, Haeckel's statements became increasingly nationalistic, with racial theories and eugenics providing a seemingly scientific basis for chauvinistic politics. Therefore, Vogt's vision of a politically revolutionary natural science ultimately proved itself short-lived.
Reception in the 20th century
In the 19th century, scientific materialism played a significant role in ideological debates. The discussions around Darwin's theory of evolution and Haeckel's monism gained prominence in the 1860s. Nonetheless, the question of a scientific worldview remained controversial, and Büchner's Kraft und Stoff remained a bestseller.
The First World War and Haeckel's death in 1919 were significant turning points. In the Weimar Republic, the debates of the 1850s were no longer relevant, and the philosophical currents of the interwar period consistently criticised materialism, despite differences in content. This also applied to logical positivism, which adhered to the idea of a scientific worldview but interpreted it in an anti-metaphysical way. The logical positivists' criterion of meaning said that a proposition was only comprehensible if it could be verified empirically. This meant that materialism, monism, idealism, and dualism all failed to meet this criterion and were considered misguided fantasies of a speculative era of philosophy. Materialist theories of consciousness were not revisited until the 1950s, when they were taken up within Anglo-Saxon philosophy. By this point, the scientific materialists of the 19th century had been forgotten. None of these new texts refer to Vogt, Moleschott or Büchner. Instead, the materialists of the post-war period focused on contemporary neuroscience.
Scientific materialism was largely ignored in the history of science and philosophy until the 1970s. Its reception in the GDR began relatively early under the influence of Dieter Wittich, who received his doctorate in 1960 with a thesis on the scientific materialists. In 1971, he published a collection of texts entitled Vogt, Moleschott, Büchner: Schriften zum kleinbürgerlichen Materialismus in Deutschland with Akademie Verlag. In his detailed introduction, Wittich, who held the only chair for epistemology in the GDR, honoured the political, scientific, and religious-critical work of the materialists. However, he also pointed out their philosophical shortcomings. He stated that the "petty-bourgeois materialists" were "vulgar materialists because they insisted on metaphysical materialism at a time when dialectical materialism had become not only a possibility but also a reality".
In 1977, Frederick Gregory, an American historian of science, published his monograph Scientific Materialism in Nineteenth Century Germany, which is still considered a standard work today. Gregory argues that the significance of Vogt, Moleschott and Büchner lies not in their specific elaboration of materialism, but rather in the social impact of their scientifically motivated criticism of religion, philosophy and politics. "The overwhelming trademark of the scientific materialists, as far as the historian is concerned, is not their materialism, but their atheism, more properly their humanistic religion."
In line with Gregory's judgement, the significance of the materialists in the secularisation process of the 19th century is generally acknowledged in the current research literature, while their philosophical positions are still subject to fierce criticism in some cases. Renate Wahsner, for example, explains: "It is not possible to contradict the view held in the literature, which denies all three of them sharpness and depth of thought". Not all authors share this negative assessment, with Kurt Bayertz, for example, defending the relevance of the scientific materialists, as they had developed "the first fully developed form of modern materialism ... Although the form of materialism developed by Vogt, Moleschott and Büchner is only one form of materialism, it is the form that is typical of modernity and the most influential and effective form today." An examination of current materialism controversies must therefore begin in the 19th century.
References
Bibliography
Primary literature
Secondary literature
External links
Digitised works of the scientific materialists in the Internet Archive
Zeno. "Lexikoneintrag zu »Materialismus«. Eisler, Rudolf: Wörterbuch der philosophischen ..." www.zeno.org (in German). Retrieved 2024-04-11.
Article on the materialism controversy in the German Medical Journal, 2006 (in German)
Scientific controversies
19th century in philosophy
Philosophy of biology
History of science
Materialism | Materialism controversy | Physics,Technology | 6,399 |
77,231,892 | https://en.wikipedia.org/wiki/Carel%20Industries | Carel Industries is an Italian multinational company that designs, manufactures, and markets hardware and software for managing air conditioning, refrigeration, humidification, and evaporative cooling systems. Founded in 1973 in Brugine, Padua, Italy, as of 2023 it operates 15 production sites and employs over 2,500 people. In 2018, it was listed on the Milan Stock Exchange in the FTSE Italia STAR and FTSE Italia Small Cap indices.
History
The firm was founded in 1973 in the province of Padua as a business-to-business company, as C.AR.EL., Costruzioni ARmadi ELettrici. Carel began operating as a supplier to Hiross, a manufacturer of air conditioning units for computing centres, producing the electrotechnical components of its units. In 1981, the company designed one of the first microprocessor-based controllers in Europe for the air-conditioning sector, which was launched on the market the following year under the name Miprosent. This was a parametric model, already pre-programmed in the factory and suitable for mass production and large volumes.
Subsequently, a programmable control was developed based on a new software programming language.
In the late 1990s, the refrigeration district, which would later become one of the largest in the world, began to develop around Carel.
International expansion
Within a few years, the company expanded first on the domestic market and then on the European market, initially in the air conditioning and humidification sectors and soon afterwards in the refrigeration sector. The production of humidification systems with a significant electromechanical component continued, but investment was mainly concentrated on electronics. International expansion began in the 1990s with the opening of sales branches in France, Great Britain, South America and Germany.
During the 2000s, branches were opened in China, Australia, the USA, Spain, India, South Africa, Russia and Korea, as well as elsewhere in Asia. This period also saw the opening of production sites in the United States, China, Brazil and, in 2015, Croatia, as well as sales offices in Northern Europe, Mexico, the Middle East, Thailand, Poland, Ukraine and Japan. In late 2018, CAREL also began to grow externally and acquired the companies Recuperator, HygroMatik and Enginia.
In 2021, it continued its international expansion with the acquisition of CFM Soğutma ve Otomasyon A.Ş, a long-standing distributor and partner in Turkey. The company growth also continued in 2022 through the acquisitions of the Italian companies Arion S.r.l. and Sauber, Germany's Klingenburg and the American Senva. Two more acquisitions were completed in 2023 with the entry of New Zealand's Eurotec and Norway's Kiona into the group.
Operations
Carel specializes in the production of hardware and software components for improving energy efficiency in HVAC and refrigeration systems. The company's products are used across commercial, industrial, and residential applications.
In the HVAC market, the company offers hardware to be integrated in individual units, such as heat pumps, shelters, rooftops, Computer Room Air Conditioners (CRACs), chillers and air handling units. CAREL also delivers products for the industrial and commercial fields, for example, entire plants/systems for shopping malls, supermarkets, museums and data centers.
In the refrigeration market, the firm is active in the design, manufacture and marketing of control and humidification systems within the food retail and food service application segments. Just as in the HVAC market, the company's solutions can be integrated both in single units, such as bottle coolers, plug-in refrigerators, multiplexed refrigerators, compressor racks and condensing units, and in complex systems, such as plants or systems for supermarkets and restaurants.
In addition to its products, Carel also provides services such as commissioning, remote operation, and monitoring of the group's HVAC/R systems through IoT solutions.
Research and development
The group has research laboratories in HVACR applications in Italy, China and the United States. A laboratory dedicated to air humidification systems and evaporative coolers exists in Padua. In April 2024, the company opened a new research center at its headquarters. The space covers 4500 square meters and includes climate chambers certified for the use of flammable refrigerants, testing booths, and a training center.
References
Electronics companies
1973 establishments in Italy | Carel Industries | Engineering | 910 |
1,246,675 | https://en.wikipedia.org/wiki/Dead%20time | For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event.
An everyday life example of this is what happens when someone takes a photo using a flash – another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead times can have other effects, such as creating possible exploits in quantum cryptography.
Overview
The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the ion drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the data acquisition (the conversion time of the analog-to-digital converters and the readout and storage times).
The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so-called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution.
The analog electronics can also introduce dead time; in particular a shaping spectroscopy amplifier needs to integrate a fast rise, slow fall signal over the longest possible time (usually 0.5–10 microseconds) to attain the best possible resolution, such that the user needs to choose a compromise between event rate and resolution.
Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account.
Finally, digitisation, readout and storage of the event, especially in detection systems with large number of channels like those used in modern High Energy Physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.
From the total time a detection system is running, the dead time must be subtracted to obtain the live time.
Paralyzable and non-paralyzable behaviour
A detector, or detection system, can be characterized by a paralyzable or non-paralyzable behaviour.
In a non-paralyzable detector, an event happening during the dead time is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time. In a paralyzable detector, an event happening during the dead time will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all. A semi-paralyzable detector exhibits an intermediate behaviour, in which the event arriving during dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.
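This contrast can be made quantitative. For a true event rate f and dead time τ, the standard idealized models give a measured rate of f/(1 + fτ) in the non-paralyzable case and f·e^(-fτ) in the paralyzable case; a minimal Python sketch of the saturation behaviour:

```python
import math

def nonparalyzable_rate(f, tau):
    """Measured rate; saturates at 1/tau as the true rate f grows."""
    return f / (1.0 + f * tau)

def paralyzable_rate(f, tau):
    """Measured rate; peaks at f = 1/tau, then falls towards zero."""
    return f * math.exp(-f * tau)

tau = 1e-6  # 1 microsecond dead time
for f in (1e5, 1e6, 1e7):
    print(f"{f:.0e}  {nonparalyzable_rate(f, tau):12.0f}  {paralyzable_rate(f, tau):12.0f}")
```

At f = 10^7 events per second the non-paralyzable model still records about 9×10^5 counts per second, approaching its 1/τ ceiling, while the paralyzable model has collapsed to a few hundred, illustrating the two saturation behaviours described above.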
Analysis
It will be assumed that the events are occurring randomly with an average frequency of f. That is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval dt is then f dt. It follows that the probability P(t) that an event will occur at time t to t+dt with no events occurring between t=0 and time t is given by the exponential distribution (Lucke 1974, Meeks 2008):

P(t) dt = f e^(-ft) dt
The expected time between events is then

⟨t⟩ = 1/f
Non-paralyzable analysis
For the non-paralyzable case, with a dead time of τ, the probability of measuring an event between t = 0 and t = τ is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time t with no intervening measurements is then given by an exponential distribution shifted by τ:

P(t) = 0 for t < τ

P(t) = f e^(-f(t-τ)) for t ≥ τ
The expected time between measurements is then

⟨t⟩ = τ + 1/f
In other words, if k counts are recorded during a measuring time T and the dead time τ is known, the actual number of events (N) may be estimated by

N = k / (1 - (k/T)τ)
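Applied directly, the correction looks as follows; a minimal Python sketch (the function name and example figures are illustrative):

```python
def true_counts_nonparalyzable(k, T, tau):
    """Estimate the true number of events from k recorded counts in a
    measuring time T, given a known non-paralyzable dead time tau
    (tau and T in the same time units)."""
    measured_rate = k / T
    if measured_rate * tau >= 1.0:
        raise ValueError("measured rate at or beyond the saturation rate 1/tau")
    return k / (1.0 - measured_rate * tau)

# 9000 counts in 1 s with a 10 microsecond dead time:
print(true_counts_nonparalyzable(9000, 1.0, 10e-6))  # about 9890 true events
```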
If the dead time is not known, a statistical analysis can yield the correct count. For example, (Meeks 2008), if t1, t2, ... are a set of intervals between measurements, then the t_i will have a shifted exponential distribution, but if a fixed value D is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as D is greater than the dead time τ. For an exponential distribution, the following relationship holds:

⟨t^n⟩ = n! ⟨t⟩^n

where n is any integer. If the above function is estimated for many measured intervals with various values of D subtracted (and for various values of n) it should be found that for values of D above a certain threshold, the above equation will be nearly true, and the count rate derived from these modified intervals will be equal to the true count rate.
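A sketch of this procedure for n = 2, for which the relation reads ⟨t²⟩ = 2⟨t⟩²; the grid of trial D values and the tolerance are illustrative choices, not prescribed by the method:

```python
import statistics

def moment_ratio(intervals, D):
    """<t^2> / (2 <t>^2) for the D-shifted intervals (negatives discarded).
    The ratio approaches 1 once D exceeds the dead time, because the
    surviving shifted intervals are then exponentially distributed."""
    shifted = [t - D for t in intervals if t > D]
    m1 = statistics.fmean(shifted)
    m2 = statistics.fmean(s * s for s in shifted)
    return m2 / (2.0 * m1 * m1)

def estimate_true_rate(intervals, D_grid, tolerance=0.05):
    """Return (D, rate) for the smallest trial D whose shifted intervals pass
    the exponentiality test; 1/<t - D> then estimates the true count rate."""
    for D in sorted(D_grid):
        shifted = [t - D for t in intervals if t > D]
        if shifted and abs(moment_ratio(intervals, D) - 1.0) < tolerance:
            return D, 1.0 / statistics.fmean(shifted)
    return None
```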
Time-To-Count
With a modern microprocessor based ratemeter one technique for measuring field strength with detectors (e.g., Geiger–Müller tubes) with a recovery time is Time-To-Count. In this technique, the detector is armed at the same time a counter is started. When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), then the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems used in nuclear power generating stations.
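A schematic sketch of the idea with illustrative numbers: the count rate follows from the mean of the measured waiting times, and dead time never enters because timing starts only while the detector is live:

```python
def time_to_count_rate(waiting_times):
    """Estimate the count rate from measured times between arming the
    detector and the next strike; the mean waiting time of a Poisson
    process is 1/f, so the rate is its reciprocal."""
    mean_wait = sum(waiting_times) / len(waiting_times)
    return 1.0 / mean_wait

# e.g. five strikes timed within a measurement window:
print(time_to_count_rate([0.052, 0.048, 0.050, 0.047, 0.053]))  # ~20 counts/s
```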
See also
Data acquisition (DAQ)
Allan variance
Photomultiplier
Positron emission tomography
Class-D amplifier
References
Further reading
Morris, S.L. and Naftilan, S.A., "Determining Photometric Dead Time by Using Hydrogen Filters", Astron. Astrophys. Suppl. Ser. 107, 71-75, Oct. 1994
Nuclear physics | Dead time | Physics | 1,300 |
51,518,781 | https://en.wikipedia.org/wiki/Directory-based%20cache%20coherence | In computer engineering, directory-based cache coherence is a type of cache coherence mechanism, where directories are used to manage caches in place of bus snooping. Bus snooping methods scale poorly due to the use of broadcasting. These methods can be used to target both performance and scalability of directory systems.
Full bit vector format
In the full bit vector format, for each possible cache line in memory, a bit is used to track whether every individual processor has that line stored in its cache. The full bit vector format is the simplest structure to implement, but the least scalable. The SGI Origin 2000 uses a combination of full bit vector and coarse bit vector depending on the number of processors.
Each directory entry must have 1 bit stored per processor per cache line, along with bits for tracking the state of the directory. This leads to a total size of (number of processors) × (number of cache lines) bits, with a storage overhead ratio of (number of processors)/(cache block size in bytes × 8).
It can be observed that directory overhead scales linearly with the number of processors. While this may be fine for a small number of processors, when implemented in large systems the size requirements for the directory becomes excessive. For example, with a block size of 32 bytes and 1024 processors, the storage overhead ratio becomes 1024/(32×8) = 400%.
Coarse bit vector format
The coarse bit vector format has a similar structure to the full bit vector format, though rather than tracking one bit per processor for every cache line, the directory groups several processors into nodes, storing whether a cache line is stored in a node rather than in an individual processor's cache. This improves size requirements at the expense of extra bus traffic, saving (processors per node - 1) × (total lines) bits of space. Thus the overhead ratio is the same, just replacing number of processors with number of processor groups. When a bus request is made for a cache line that one processor in the group has, the directory broadcasts the signal to every processor in the node rather than just the caches that contain it, leading to unnecessary traffic to processors that do not have the data cached.
In this case the directory entry uses 1 bit for a group of processors for each cache line. For the same example as Full Bit Vector format if we consider 1 bit for 8 processors as a group, then the storage overhead will be 128/(32×8)=50%. This is a significant improvement over the Full Bit Vector format.
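The two overhead figures above follow from one expression, with the full bit vector as the special case of one processor per group; a small Python sketch (directory state bits are ignored for simplicity):

```python
def directory_overhead(processors, block_bytes, processors_per_group=1):
    """Presence bits per directory entry divided by the bits in a cache block."""
    groups = processors // processors_per_group
    return groups / (block_bytes * 8)

print(directory_overhead(1024, 32))     # 4.0 -> 400% (full bit vector)
print(directory_overhead(1024, 32, 8))  # 0.5 -> 50%  (coarse, 8 per group)
```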
Sparse directory format
A cache only stores a small subset of blocks in main memory at a particular time. Hence most of the entries in the directory will belong to uncached blocks. In the sparse directory format the wastage is reduced by storing only the cached blocks in the directory. Consider a processor with a cache size of 64KB with a block size of 32 bytes and the main memory size to be 4MB. The maximum number of entries that the directory can have in the sparse directory format is 2048. If the directory has an entry for all the blocks in the memory the number of entries in the directory will be 131072. Thus it is evident that the storage improvement provided by sparse directory format is very significant.
Number-balanced binary tree format
In this format the directory is decentralised and distributed among the caches that share a memory block. Different caches that share a memory block are arranged in the form of a binary tree. The cache that accesses a memory block first is the root node. Each memory block has the root node information (HEAD) and a sharing counter field (SC). The SC field holds the number of caches that share the block. Each cache entry has pointers to the next sharing caches, known as L-CHD and R-CHD. A condition for this directory is that the binary tree should be number balanced, i.e. the number of nodes in the left subtree must be equal to or one greater than the number of nodes in the right subtree. All subtrees should also be number balanced.
Chained directory format
In this format the memory holds the directory pointer to the latest cache that accessed the block, and each cache has a pointer to the previous cache that accessed the block. When a processor sends a write request to a block in memory, it sends invalidations down the chain of pointers. In this directory, when a cache block is replaced, the list must be traversed in order to update the directory, which increases latency. To prevent this, doubly linked lists are now widely used, in which each cached copy has pointers to both the previous and the next cache that accessed the block.
Limited pointer format
The limited pointer format uses a set number of pointers to track the processors that are caching the data. When a new processor caches a block, a free pointer is chosen from a pool to point to that processor. There are a few options for handling cases when the number of sharers exceeds the number of free pointers. One method is to invalidate one of the sharers, using its pointer for the new requestor, though this can be costly in cases where a block has a large number of readers, such as a lock. Another method is to have a separate pool of free pointers available to all the blocks. This method is usually effective as the number of blocks shared by a large number of processors is not normally very large.
References
Computer architecture | Directory-based cache coherence | Technology,Engineering | 1,109 |
187,801 | https://en.wikipedia.org/wiki/Shopping%20cart | A shopping cart (American English), trolley (British English, Australian English), or buggy (Southern American English, Appalachian English), also known by a variety of other names, is a wheeled cart supplied by a shop or store, especially supermarkets, for use by customers inside the premises for transport of merchandise as they move around the premises, while shopping, prior to heading to the checkout counter, cashiers or tills. Increasing the amount of goods a shopper can collect increases the quantities they are likely to purchase in a single trip, boosting store profitability.
In many cases, customers can then also use the cart to transport their purchased goods to their vehicles. However, some carts are designed to prevent them from leaving either the store or the designated parking area by magnetically locking the wheels. In many places in the United States, Canada and the United Kingdom, customers are encouraged to leave the carts in designated areas within the parking lot, and store employees will return the carts to the entrances. In some areas, carts are connected by locking mechanisms that require the insertion of a coin or token to release an individual cart. Returning the cart to its designated area releases the coin to the customer.
Design
Most modern shopping carts are made of metal or a combination of metal and plastic and have been designed to nest within each other in a line to facilitate collecting and moving many at one time and also to save on storage space. The carts can come in many sizes, with larger ones able to carry a child. There are also specialized carts designed for two children, and electric mobility scooters with baskets designed for individuals with disabilities.
As of 2006, approximately 24,000 children are injured in the United States each year in shopping carts. Some stores both in the U.S. and internationally have child carrying carts that look like a car or van with a seat where a child can sit equipped with a steering wheel and sometimes a horn. Such "Car-Carts" may offer protection and convenience by keeping the child restrained, lower to the ground, protected from falling items, and amused.
Shopping carts are usually fitted with four wheels, however if any one wheel jams the cart can become difficult to handle. Most carts in the United States have swivel wheels at the front, while the rear wheels are fixed in orientation, while in Europe it is more common to have four swivel wheels. This difference in design correlates with smaller retail premises in Europe. The front part of the cart is often sectioned off in order to place household goods such as bleach, cleaning products etc. so that they do not mix with edible products.
An alternative to the shopping cart is a small hand-held shopping basket. A customer may prefer a basket for a small amount of merchandise. Small shops, where carts would be impractical, often supply only baskets, or may offer a small cart which uses an inserted shopping basket within the frame of the cart to provide either choice to a customer.
History
Development of first shopping cart by Sylvan Goldman
One of the first shopping carts was introduced on June 4, 1937, the invention of Sylvan Goldman, owner of the Humpty Dumpty supermarket chain in Oklahoma. One night, in 1936, Goldman sat in his office wondering how customers might move more groceries. He found a wooden folding chair and put a basket on the seat and wheels on the legs. Goldman and one of his employees, a mechanic named Fred Young, began tinkering. Their first shopping cart was a metal frame that held two wire baskets. Since they were inspired by the folding chair, Goldman called his carts "folding basket carriers". Another mechanic, Arthur Kosted, developed a method to mass-produce the carts by inventing an assembly line capable of forming and welding the wire. The cart was awarded patent number 2,196,914 on April 9, 1940 (Filing date: March 14, 1938), titled, "Folding Basket Carriage for Self-Service Stores". They advertised the invention as part of a new "No Basket Carrying Plan". Goldman had already pioneered self-serve stores and carts were part of the self-serve retail concept.
The invention did not catch on immediately. Men found them effeminate; women found them suggestive of a baby carriage. "I've pushed my last baby," an offended woman informed Goldman. After Goldman hired several male and female models to push his new invention around his store and demonstrate their utility, as well as greeters to explain their use, shopping carts became extremely popular and Goldman became a multimillionaire. In urban areas like New York City, where transporting groceries home from the store's parking lot is more likely to involve walking and/or a trip by public transportation than a car ride, privately owned carts resembling Goldman's design are still popular. Instead of baskets, these carts are built to hold the paper bags dispensed by the grocery store.
Another shopping cart innovator was Orla Watson, who invented the swinging rear door to allow for "nesting" in 1946. Orla Watson continued to make modifications to his original design. Advice from his trusted business partners Fred Taylor, a grocery store owner in Kansas City, and George O'Donnell, a grocery store refrigeration salesman, and the incorporation of Watson's swinging door yielded the familiar nesting cart that we see today using the "double-decker" approach. Goldman patented a similar version of the cart with only one basket rather than the double-decker feature, which he called the "Nest-Kart" in 1948, over one year after Watson filed for his patent. The Nest-Kart incorporated the same nesting mechanism present on the shopping carts designed by Watson, and an interference investigation was ordered by Telescope Carts, Inc. alleging infringement of the patent in 1948. After a protracted legal battle, Goldman ultimately recognized Watson's invention and paid one dollar in damages for counterfeit, in exchange for which Watson granted Goldman an exclusive operating license (apart from the three licenses that had already been granted).
In 1909, Bessie DeCamp invented a seat belt for chairs, go-carts, or carriages, well before shopping carts with child seating areas existed. Goldman introduced a child seating area on shopping carts in 1947, but seat belts for shopping carts were not introduced until 1967, by David Allen; his design used a retractable seat belt, advanced for its time.
Development of nesting carts by Orla Watson
In 1946, Orla Watson devised a system for a telescoping (i.e., "nesting") shopping cart which did not require assembly or disassembly of its parts before and after use like Goldman's cart; Goldman's design up until this point required that the cart be unfolded much like a folding chair. This cart could be fitted into another cart for compact storage via a swinging one-way rear door. The swinging rear door formed the basis of the patent claim, and was a major innovation in the evolution of the modern shopping cart. Watson applied for a patent on his shopping cart invention in 1946, but Goldman contested it and filed an application for a similar patent with the swinging door feature on a shopping cart with only one basket in 1948 which Goldman named the "Nest-Kart". After considerable litigation and allegations of patent infringement, Goldman relinquished his rights to the patent in 1949 to Watson and his company, Telescope Carts, Inc. realizing that the swinging rear door feature was the key to Watson's patent. Watson was awarded patent #2,479,530 on August 16, 1949. In exchange, Goldman was granted an exclusive licensing right in addition to the three other licenses previously granted; Telescope Carts, Inc. continued to receive royalties for each cart produced by Goldman's company that incorporated the "nesting" design. This included any shopping cart utilizing his hinged rear door, including the familiar single basket "nesting" designs similar to those used in the present.
Owing to its overwhelming success, many different manufacturers desired to produce shopping carts with the rear swinging door feature but were denied due to the exclusive license issued to Goldman. The federal government filed a lawsuit against Telescope Carts, Inc. in 1950 alleging the exclusive license granted to Goldman was invalid, and a Consent Decree was entered into where Telescope Carts, Inc. agreed to offer the same license to any manufacturer. Orla Watson and Telescope Carts, Inc. licensed their telescoping shopping cart design to several manufacturers throughout the 1950s and 1960s until the patent expired.
New developments
In 2012, a driverless shopping cart was made by Chaotic Moon Labs. The device, called "Project Sk8" or "Smarter Cart" was basically a cart fitted with Windows Kinect (to detect obstacles), and an electric drivetrain, and used in conjunction with a Windows 8 tablet. For smaller stores, shopping baskets with wheels can be used either as a large basket or a small cart. These carts are designed for indoor use only.
In 2017, a mobile device shelf was added to shopping carts at Target stores to support the digital in-store shopping experience. The shelf was invented and designed by Nick Dyer, a former employee of Target.
The introduction of "EASY Shopper" in 2019 by Pentland Firth Software GmbH in partnership with the German retailer EDEKA represents another step in the evolution of shopping carts. Equipped with a tablet, barcode scanner, and cashierless checkout system, the smart shopping cart aims to provide customers with a more streamlined and convenient shopping experience. The system utilizes computer vision to accurately track items in the cart and allow customers to scan and pay for their purchases as they shop, reducing the need to stand in line and wait to pay for their items.
Retail store acceptance
Past studies determined that retailers who did not offer shopping carts such as Sears suffered lower sales in comparison to retailers who did use shopping carts.
Subsequent to the introduction of shopping carts and centralized checkout lines at Sears stores, the company noticed a correlating increase in sales.
In 2004, British supermarket chain Tesco trialed shopping carts with user-adjustable wheel resistance, heart rate monitoring and calorie counting hardware in an effort to raise awareness of health issues. The cart's introduction coincided with Tesco's sponsorship of Cancer Research UK's fundraising event Race for Life.
Also in 2004, shopping carts were identified as a source of pathogens and became a major public health concern. This was primarily due to the media spotlight on a Japanese research study revealing large amounts of bacteria on shopping carts. Those findings were later backed by a University of Arizona study in 2007.
In 2009, researchers developed prototypes of computerized context aware shopping carts by attaching tablet computers to ordinary carts. Initial field trials showed that the prototype's context awareness provided an opportunity for enhancing and altering the shopping experience.
Some retailers, such as Target, have begun using carts fully made of recycled plastic with the only metal part being the wheel axles, drawing away from the established metal cart design. Target's cart has won design awards for its improved casters, interchangeable plastic parts to simplify repairs, and handles that improve maneuverability. Other cart designs incorporate additional features such as a cup holder for cold or hot drinks, a holder for a bouquet of flowers, or a secure shelf for a tablet computer or mobile phone to allow the use of mobile coupons and circulars. An all-plastic design created by Bemis Manufacturing Company for the Wisconsin-based Festival Foods, and also used by Whole Foods Market, combines these features with extra rungs on the side rail designed to attach plastic bags or carry handles for beverages. Smaller half-sized carts for smaller shopping trips have also become common.
Deposits
In many countries, the customer has to make a small deposit by inserting a coin, token or card, which is returned if and when the customer returns the cart to a designated cart parking point. The system works through a lock mounted on the handle of the cart, connecting it to a chain mounted on the cart in front of it when nested together, or to a chain mounted on a cart collection corral. Inserting the token unlocks the chain, and reinserting the chain locks it in place and ejects the token for user to retrieve.
One motivation behind the deposit system is to reduce the expense of employees having to gather carts that are not returned, and to avoid damage done by runaway carts. Another benefit is that carts are less likely to be removed from the store premises and abandoned in the surrounding neighborhood. Carts that are not returned may be returned voluntarily by a pedestrian, with the deposit coin acting as a reward.
Although almost ubiquitous in continental Europe and the UK, the deposit system is less common in Canada and has not been widely adopted in the United States, with the exception of some chains like ALDI, which require a $0.25 deposit. One of the first store chains to use the $0.25 deposit system in the US was the Real Superstore (a subsidiary of National Supermarkets) in the early 1990s. Other stores such as Costco and ShopRite also use the coin deposit system, but it is not used at all of their locations.
In Australia, deposit systems are common in some local government areas, as they have been made compulsory by local law. Usually, all ALDI stores, and most Coles and Woolworths stores will have a lock mechanism on their carts that requires a $1 or $2 coin to unlock.
The deposit varies, but usually coins of higher value, such as €1, £1, or $1 are used. While the deposit systems usually are designed to accommodate a certain size of domestic coin, foreign coins, former currencies (like German D-Marks), or even appropriately folded pieces of cardboard can be used to unlock the carts as well. Cart collectors are also usually provided with a special key that they can use to unlock the carts from the cart bay and get the key back.
Some retailers sell "tokens" as an alternative to coins, often for charity. Merchandising companies also offer branded shopping tokens as a product.
Theft prevention
Shopping cart theft can be a costly problem with stores that use them. The carts, which typically cost between $75 and $150 each, with some models costing $300–400, are removed by people for various purposes. To prevent theft, estimated at $800 million worldwide per annum, stores use various security systems as discussed below.
Cart retrieval service
Most retailers in North America utilize a cart retrieval service, which collects carts found off the store's premises and returns them to the store for a fee. The primary strength of this system is the ability of pedestrian customers to take purchases home and allow retailers to recapture abandoned carts in a timely manner at a fraction of the cost of a replacement cart. It also allows retailers to maintain their cart inventories without an expensive capital outlay. A drawback of this method is that it is reactive, instead of proactively preventing the carts from leaving the store premises.
Electronic and magnetic
Electronic systems are sometimes used by retailers. Each shopping cart is fitted with an electronic locking wheel clamp, or "boot". A transmitter with a thin wire is placed around the perimeter of the parking lot, and the boot locks when the cart leaves the designated area. Store personnel must then deactivate the lock with a handheld remote control to return the cart to stock. Often, a line is painted in front of the broadcast range to warn customers that their cart will stop when rolled past the line. However, these systems are very expensive to install and, although helpful, are not foolproof. The wheels can be lifted over the electronic barrier or pushed hard enough that the locks break. There are also safety concerns if the person pushing the cart is running, or if a cart that fails to lock at the boundary is taken onto a road and then locks unexpectedly because of magnetic materials under the road surface. Some cities have required retailers to install locking wheel systems on their shopping carts. In some cases, electronic systems companies have encouraged passage of such laws to create a captive audience of potential customers.
Physical
A low-tech form of theft prevention utilizes a physical impediment, such as vertical posts at the store entrance to keep carts from being taken into the parking lot. This method also impedes physically disabled customers, which may be illegal in many jurisdictions. For example, in the United States it would be a violation of the Americans with Disabilities Act of 1990.
Another method (used for example by UK supermarket Iceland) is to mount a pole taller than the entrance, onto the shopping cart, so that the pole will block exit of the cart. However, this method requires that the store aisles be higher than the pole, including lights, piping, any overhead signage and fixtures. It also prevents customers from carting their purchases to their cars in the store's carts. Many customers learn to bring their own folding or otherwise collapsible cart with them, which they can usually hang on the store's cart while shopping.
A further system is to use a cattle grid style system. All pedestrian exits have specially designed flooring tiles, which, along with specially designed wheels on the cart, will immobilize the cart as they roll onto them. Like the magnetic systems, this can easily be overcome by lifting the cart over the tiles.
Name
The names of shopping carts vary by region. The following names are region-specific names for shopping carts. Many of these names may be used alone or preceded by grocery, shopping, or supermarket in descriptive phrases for disambiguation:
cart or basket – the United States, Canada and the Philippines
buggy – used by some in Southeast Michigan, Western Pennsylvania (where it is considered part of the region's dialect), the Southern United States and certain parts of Canada
trolley – the United Kingdom, Ireland, Australia, New Zealand, Malaysia, Trinidad and Tobago, South Africa and some regions of Canada. Was also formerly used in the Philippines
carriage – used by some in the New England region of the United States
barrae or coohudder – some places in Scotland
bascart – various regions
wagon – New York, Hawaii
trundler – some places in New Zealand
wheelbasket – some places in the Eastern United States, notably Western Massachusetts
For disabled people
Special electronic shopping carts are provided by many retailers for the elderly or disabled people. These are essentially electric wheelchairs with an attached basket. They allow customers to navigate around the store and collect items.
Manually powered carts are also available specifically designed for use by wheelchair users. A still-to-be-implemented aid for people with disabilities is the addition of a guide wheel at the center of rotation of a cart with four caster wheels. In order to allow the nesting of carts to be unhindered, this guide wheel is attached to the front of the cart with a piece of spring steel which bends under the cart's weight.
Caroline's Carts are designed for aiding non-ambulatory adults or larger children, but require an additional person to push.
Conceptual detours of the shopping cart in art, design and consumerism
Shopping cart manufacturers such as Caddie, Wanzl, or Brüder Siegel maintained intensive direct and indirect mutual business relations with artists, graphic designers, industrial and furniture designers such as Charles Eames, Harry Bertoia, or Verner Panton since the market launch of the shopping cart - not only for new and further developments of their own shopping carts and wire basket goods, but also for advertising and PR purposes. Olivier Mourgue, Otl Aicher, as well as other artists and designers had wire furniture or artwork made by shopping cart manufacturers.
One of the most famous thematizations of a shopping cart in art is the 1970 sculpture "Supermarket Lady" by US pop artist Duane Hanson, which is critical of consumerism.
In 1983, the neoist "one-man artist group" Stiletto Studios from Berlin converted a 'stray' shopping cart into an 'inverted' cantilever wire chair on the principle of the objet trouvé. As a design simulation critical of consumer culture, Stiletto's ironically titled "Consumer's Rest" Lounge Chair alluded to the fact that Eames' and Bertoia's wire furniture were already over-aestheticized adaptations of the contemporary advent of shopping carts in the United States, and were thus themselves recursions to the consumer-revolutionary context of the International Style in architecture and design.
By far, most shopping carts that are taken from store premises and not returned, however, are appropriated by occasional subsequent users without any artistic or culture-critical readymade intentions, simply as makeshift solutions: as improvised pieces of furniture (for example, as laundry baskets), as universal nomadic furniture for the household goods of the homeless, or, ignoring the fact that the zinc and plastic coatings of the wire surfaces are harmful to health when heated, as ad hoc barbecue grills.
See also
Motorized shopping cart
Shopping cart theory
Toy wagon
Trolley (disambiguation)
References
External links
Shopping Cart–Related Injuries to Children American Academy Of Pediatrics
Paper on the history of the shopping cart
The "Telescopic Shopping Cart Collection" at the National Museum of American History (Smithsonian Institution)
Reversing the Operation of CAPS Shopping Cart Wheel Locks
DEFRA guidance on the security of shopping trolleys.
Guidance on Section 99 and Schedule 4 of the Environmental Protection Act 1990 as amended by the Clean Neighbourhoods and Environment Act 2005 . DEFRA
Daugherty, Julia Ann P. "Encyclopedia of Oklahoma History and Culture." Oklahoma State University - Library - Home. Web. 11 Oct. 2010.
Retail store elements
American inventions
Carts
1937 introductions
1946 introductions | Shopping cart | Technology | 4,428 |
29,269,447 | https://en.wikipedia.org/wiki/Somatotropin%20family | The Somatotropin family is a protein family whose titular representative is somatotropin, also known as growth hormone, a hormone that plays an important role in growth control. Other members include choriomammotropin (lactogen), its placental analogue; prolactin, which promotes lactation in the mammary gland, and placental prolactin-related proteins; proliferin and proliferin related protein; and somatolactin from various fishes. The 3D structure of bovine somatotropin has been predicted using a combination of heuristics and energy minimisation.
Human peptides from this family
CSH1; CSH2; CSHL1; GH1; GH2 (hGH-V); PRL;
References
Protein domains
Hormones of the somatotropic axis | Somatotropin family | Biology | 183 |
29,728,586 | https://en.wikipedia.org/wiki/Spatial%20Archive%20and%20Interchange%20Format | The Spatial Archive and Interchange Format (SAIF, pronounced safe) was defined in the early 1990s as a self-describing, extensible format designed to support interoperability and storage of geospatial data.
SAIF dataset
SAIF has two major components that together define SAIFtalk. The first is the Class Syntax Notation (CSN), a data definition language used to define a dataset's schema. The second is the Object Syntax Notation (OSN), a data language used to represent the object data adhering to the schema. The CSN and OSN are contained in the same physical file, along with a directory at the beginning of the file. The use of ASCII text and a straightforward syntax for both CSN and OSN ensure that they can be parsed easily and understood directly by users and developers. A SAIF dataset is compressed using the zip archive format.
Schema definition
SAIF defines 285 classes (including enumerations) in the Class Syntax Notation, covering the definitions of high-level features, geometric types, topological relationships, temporal coordinates and relationships, geodetic coordinate system components and metadata. These can be considered as forming a base schema. Using CSN, a user defines a new schema to describe the features in a given dataset. The classes belonging to the new schema are defined in CSN as subclasses of existing SAIF classes or as new enumerations.
A ForestStand::MySchema for example could be defined with attributes including age, species, etc. and with ForestStand::MySchema specified as a subclass of GeographicObject, a feature defined in the SAIF standard. All user defined classes must belong to a schema, one defined by the user or previously existing. Different schemas can exist in the same dataset and objects defined under one schema can reference those specified in another.
Inheritance
SAIF supports multiple inheritance, although common usage involved single inheritance only.
Object referencing
Object referencing can be used as a means of breaking up large monolithic structures. More significantly, it can allow objects to be defined only once and then referenced any number of times. A section of the geometry of the land-water interface could define part of a coastline as well as part of a municipal boundary and part of a marine park boundary. This geometric feature can be defined and given an object reference, which is then used when the geometry of the coastline, municipality and marine park are specified.
Multimedia
Multimedia objects can also be objects in a SAIF dataset and referenced accordingly. For example, image and sound files associated with a given location could be included.
Model transformations and related software applications
The primary advantage of SAIF was that it was inherently extensible following object oriented principles. This meant that data transfers from one GIS environment to another did not need to follow the lowest common denominator between the two systems. Instead, data could be extracted from a dataset defined by the first GIS, transformed into an intermediary, i.e., the semantically rich SAIF model, and from there transformed into a model and format applicable to the second GIS.
This notion of model to model transformation was deemed to be realistic only with an object oriented approach. It was recognized that scripts to carry out such transformations could in fact add information content. When Safe Software developed the Feature Manipulation Engine (FME), it was in large measure with the express purpose of supporting such transformations. The FMEBC was a freely available software application that supported a wide range of transformations using SAIF as the hub. The FME was developed as a commercial offering in which the intermediary could be held in memory instead of as a SAIF dataset.
History
The SAIF project was established as a means of addressing interoperability between different geographic information systems. Exchange formats of particular prominence at the time included DIGEST (Digital Geographic Information Exchange Standard) and SDTS (Spatial Data Transfer Specification, later accepted as the Spatial Data Transfer Standard). These were considered as too inflexible and difficult to use. Consequently, the Government of British Columbia decided to develop SAIF and to put it forward as a national standard in Canada.
SAIF became a Canadian national standard in 1993 with the approval of the Canadian General Standards Board. The last version of SAIF, published in January 1995, is designated as CGIS-SAIF Canadian Geomatics Interchange Standard: Spatial Archive and Interchange Format: Formal Definition (Release 3.2), issue CAN/CGSB-171.1-95, catalogue number P29-171-001-1995E.
The work on the SAIF modeling paradigm and the CSN classes was carried out principally by Mark Sondheim, Henry Kucera and Peter Friesen, all with the British Columbia government at the time. Dale Lutz and Don Murray of Safe Software developed the Object Syntax Notation and the Reader and Writer software that became part of the Feature Manipulation Engine.
SAIF was brought to the attention of Michael Stonebraker and Kenn Gardels of the University of California at Berkeley, and then to those working on the initial version of the Open Geospatial Interoperability Specification (OGIS), the first efforts of what became the Open Geospatial Consortium (OGC). A series of 18 submissions to the ISO SQL Multimedia working group also helped tie SAIF to the original ISO work on geospatial features.
Today SAIF is of historical interest only. It is significant as a precursor to the Geography Markup Language and as the formative element in the development of the widely used Feature Manipulation Engine.
See also
References
Sondheim, M., K. Gardels, and K. Buehler, 1999. GIS Interoperability. pp. 347–358. in Geographical Information Systems (Second Edition), Volume 1, edited by Paul A. Longley, Michael F. Goodchild, David J. Magurie and Davide W. Rhind.
Sondheim, M., P. Friesen, D. Lutz, and D. Murray. 1997. Spatial Archive and Interchange Format (SAIF). in Spatial Database Transfer Standards 2: Characteristics for Assessing Standards and Full Descriptions of the National and International Standards in the World. edited by Moellering H. and Hogan R. Elsevier, Netherlands. .
Surveys and Resource Mapping Branch. Spatial Archive and Interchange Format, Release 3.2, Formal Definition. 1995. (also Release 3.1 (1994); 3.0 (1993); 2.0, (1992); 1.0 (1991); and 0.1, (1990)) Surveys and Resource Mapping Branch, British Columbia Ministry of Environment, Lands and Parks. 258p. Also published by the Canadian General Standards Board, CAN/CGSB-171.1-95.
External links
Government of Canada Publications, CGIS-SAIF Release 3.2
SAIF Release 3.1
Safe Software, 2010, FME Readers and Writers, (Spatial Archive and Interchange Format, pp. 183 - 191)
Interoperability
GIS file formats
Open Geospatial Consortium | Spatial Archive and Interchange Format | Engineering | 1,477 |
2,915,421 | https://en.wikipedia.org/wiki/John%20Adams%20Whipple | John Adams Whipple (September 10, 1822 – April 10, 1891) was an American inventor and early photographer. He was the first in the United States to manufacture the chemicals used for daguerreotypes. He pioneered astronomical and night photography. He was a prize-winner for his extraordinary early photographs of the moon and he was the first to produce images of stars other than the sun. Among those was the star Vega and the Mizar-Alcor stellar sextuple system, which was thought to be a double star until 2009.
Biography
Whipple was born in Grafton, Massachusetts, to Jonathan and Melinda (Grout) Whipple. While a boy he was an ardent student of chemistry, and on the introduction of the daguerreotype process into the United States (1839–1840) he was the first to manufacture the necessary chemicals.
His health having become impaired through this work, he devoted his attention to photography. He made his first daguerreotype in the winter of 1840, "using a sun-glass for a lens, a candle box for a camera, and the handle of a silver spoon as a substitute for a plate." Over time he became a prominent daguerreotype portraitist in Boston. In addition to making portraits for the Whipple and Black studio, Whipple photographed important buildings in and around Boston, including the house occupied by General George Washington in 1775 and 1776 (photographed circa 1855, now in the Smithsonian).
Whipple married Elizabeth Mann (1819–1891) on May 12, 1847, in Boston.
Between 1847 and 1852 Whipple and astronomer William Cranch Bond, director of the Harvard College Observatory, used Harvard's Great Refractor telescope to produce images of the moon that are remarkable in their clarity of detail and aesthetic power. This was the largest telescope in the world at that time, and their images of the moon took the prize for technical excellence in photography at the great 1851 Crystal Palace Exhibition in London.
On the night of July 16–17, 1850, Whipple and Bond made the first daguerreotype of a star (Vega). In 1863, Whipple used electric lights to take night photographs of Boston Common.
Whipple was as prolific an inventor as he was a photographer. He invented crayon daguerreotypes and crystallotypes (daguerreotypes on glass). With his partner or assistant, William Breed Jones, he developed the process for making paper prints from glass albumen negatives (crystallotypes). His American patents include Patent Number 6,056, the "Crayon Daguerreotype", and Patent Number 7,458, the "Crystallotype" (credit shared with William B. Jones).
Whipple died suddenly, of pneumonia, on April 10, 1891, in Cambridge, Massachusetts, and was buried at Westborough, Worcester Co., Massachusetts.
Collections of his works
Boston Athenaeum
Boston Public Library
George Eastman House
Harvard University
Historic New England
Massachusetts Historical Society
Metropolitan Museum of Art
Smithsonian American Art Museum
References
19th-century American inventors
Astrophotographers
1822 births
1891 deaths
Pioneers of photography
Artists from Boston
People from Grafton, Massachusetts
19th-century American scientists
19th-century American photographers
Harvard College Observatory people | John Adams Whipple | Astronomy | 661 |
1,697,186 | https://en.wikipedia.org/wiki/DVB-H | DVB-H (digital video broadcasting - handheld) is one of three prevalent mobile TV formats. It is a technical specification for bringing broadcast services to mobile handsets. DVB-H was formally adopted as ETSI standard EN 302 304 in November 2004. The DVB-H specification (EN 302 304) can be downloaded from the official DVB-H website. For a few months from March 2008, DVB-H was officially endorsed by the European Union as the "preferred technology for terrestrial mobile broadcasting".
The major competitors of this technology were Qualcomm's MediaFLO system, the MBMS mobile-TV standard based on 3G cellular systems, and the ATSC-M/H format in the U.S.
At the time, the recently introduced DVB-SH (Satellite services to Handhelds) and the anticipated DVB-NGH (Next Generation Handheld) were seen as possible future enhancements to DVB-H, providing improved spectral efficiency and better modulation flexibility.
DVB-H struggled against resistance from network operators to include the technology in their subsidized handsets. In late 2016, it was acknowledged within the DVB Project newsletter that DVB-H and DVB-SH had been a commercial failure.
Ukraine was the last country with a nationwide broadcast in DVB-H, which began transitioning to DVB-T2 during 2019.
Technical explanation
DVB-H technology is a superset of the successful DVB-T (Digital Video Broadcasting - Terrestrial) system for digital terrestrial television, with additional features to meet the specific requirements of handheld, battery-powered receivers. In 2002 four main requirements for the DVB-H system were agreed: broadcast services for portable and mobile usage with 'acceptable quality'; a typical user environment, and so geographical coverage, as for mobile radio; access to service while moving in a vehicle at high speed (as well as imperceptible handover when moving from one cell to another); and as much compatibility as possible with existing digital terrestrial television (DVB-T), to allow sharing of network and transmission equipment.
DVB-H can offer a downstream channel at high data rates, which can be used standalone or as an enhancement of the mobile telecommunication networks that many typical handheld terminals are able to access anyway.
Time slicing technology is employed to reduce power consumption in small handheld terminals. IP datagrams are transmitted as data bursts in small time slots. Each burst may contain up to two megabits of data (including parity bits). There are 64 parity bits for each 191 data bits, protected by Reed-Solomon codes. The front end of the receiver switches on only for the time interval when the data burst of a selected service is on air. Within this short period of time a high data rate is received and stored in a buffer, which can either hold downloaded applications or play out live streams.
The achievable power saving depends on the ratio of on-time to off-time. If there are approximately ten or more bursted services in a DVB-H stream, the power saving for the front end could be up to 90%. DVB-H is a technical system which was carefully tested by the DVB-H Validation Task Force in the course of 2004 (see ETSI Technical Report TR 102 401). DVB-SH improved radio performance and can be seen as an evolution of DVB-H.
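As a rough illustration of this arithmetic, the sketch below (Python) estimates the front-end duty cycle and power saving from the burst parameters described above; the synchronization time and example rates are illustrative assumptions, not values from the standard.

# Sketch of DVB-H time-slicing arithmetic; all parameter values are
# illustrative assumptions, not normative figures from EN 302 304.
def power_saving(burst_bits, burst_rate_bps, service_rate_bps, sync_time_s=0.25):
    # On-time: receiving one burst plus re-synchronizing beforehand.
    on_time = burst_bits / burst_rate_bps + sync_time_s
    # Cycle time: how often bursts recur for a service of this average rate.
    cycle_time = burst_bits / service_rate_bps
    return max(0.0, 1.0 - on_time / cycle_time)

# A 2 Mbit burst delivered at 10 Mbit/s for a 350 kbit/s service:
print(power_saving(2_000_000, 10_000_000, 350_000))  # ~0.92, i.e. ~90% saving

# Reed-Solomon overhead implied by 64 parity symbols per 191 data symbols:
print(64 / 255)  # ~25% of each burst carries parity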
DVB-H is designed to work in the following bands:
VHF-III (170-230 MHz, or a portion of it)
UHF-IV/V (470-862 MHz, or a portion of it)
L (1.452-1.492 GHz)
DVB-SH now and DVB-NGH in the near future are expected to expand the supported bands.
DVB-H can coexist with DVB-T in the same multiplex.
DVB-IPDC
DVB-IPDC (DVB for IP Datacasting) is the specification for broadcasting mobile TV services based on Internet Protocol. DVB-IPDC is a set of systems layer specifications originally designed for use with the DVB-H physical layer, but that will ultimately be used as a higher layer for all DVB mobile TV systems, including DVB-SH, and indeed as a higher layer for any other IP capable system.
In short, with regard to mobile TV, these specifications define what is delivered, how it is delivered, how it is described, and how it is protected. They cover system architecture, use cases, DVB PSI/SI signalling, electronic service guide (ESG), content delivery protocols (CDP), and service purchase and protection (SPP). Almost all of these have now been published as formal ETSI standards.
The full set of DVB-IPDC specifications is available from dvb-h.org.
DVB-NGH
In 2007 a study mission was formed to investigate the options for a potential DVB-H2 successor to DVB-H, but the project was later shelved.
In November 2009, the DVB group made a 'call for technologies' for a new system (DVB-NGH - Next Generation Handheld) to update and replace the DVB-H standard for digital broadcasting to mobile devices. The schedule was for submissions to be closed in February 2010, the new ETSI standard published in 2013, and rollout of the first DVB-NGH devices from 2015.
DVB-SH
The DVB-SH (Satellite services to Handheld) standard was published in February 2007. Trials are ongoing in several European countries.
Service launches
In Finland, the license to operate a DVB-H network was awarded to Digita. In May 2006 Digita announced that it had signed a contract with Nokia to use Nokia's DVB-H platform for the service. The network was supposed to be launched on 1 December 2006, but disagreements regarding copyrights of the broadcast material stalled the launch. Among the services to be available were Voice TV and Kiss digital radio. Initially the network was to cover 25% of the population, with a coverage area of Helsinki, Oulu and Turku. Mobiili-TV started commercial services on 10 May 2007. The service ended on 5 March 2012 due to a lack of subscribers. Network operator Digita was granted permission to upgrade the old DVB-H transmitters to next-generation DVB-T2 Lite technology, which can carry HD, SD and mobile-size pictures for TV sets, laptops, pocket PCs, mobile phones, etc. simultaneously.
In India, the public broadcaster Prasar Bharati (also known as DD, for Doordarshan) started DVB-H trials in various metropolitan areas to test the reception quality of the broadcast coverage. Moreover, DD was broadcasting 8 channels in New Delhi.
In Italy, 3 Italia and Reti Radiotelevisive Digitali launched nationwide services in May 2006, followed by both Telecom Italia Mobile (TIM) and Mediaset in June 2006 and by Vodafone in December 2006. DVB OSF was the security standard adopted in this country. Since June 2008, 3 Italia has made some DVB-H channels free for all users.
In Singapore, M1, StarHub, Singtel and Mediacorp launched a nationwide DVB-H pilot for the Beijing Olympics, adopting the OMA BCAST SmartCard profile.
In the Philippines, SMART launched its mobile TV service, called MyTV. It was only available on the Nokia N92 and N77 mobile phones because the system was incompatible with other security technologies, such as DVB OSF, the one supported by all other handset manufacturers. A transition to the OMA SmartCard Profile would make the service available on other mobile phone models, but this transition was not foreseen before the end of 2008.
In the United States, Crown Castle rolled out a DVB-H offering in 2006 through a company it created called Modeo. It was initially offered in New York but was terminated in 2007. Modeo was attempting to compete head-to-head with MediaFLO, which is both a mobile TV company and a mobile TV standard created by Qualcomm.
At the NAB trade show in April 2006, a second service launch was announced by SES Americom, Aloha Partners and Reti Radiotelevisive Digitali. Titled Hiwire Mobile Television, the service was set to begin trials in Las Vegas in Q4 2006. Hiwire owns two 6 MHz channels of spectrum at 700 MHz covering most of the country.
In Albania, DigitAlb and GrassValley launched the service on 20 December 2006, with free access up to the end of 2008. The package consists of 16 channels and covers 65% of the territory as of August 2007.
In Vietnam, VTC launched a nationwide service on 21 December 2006. As with Smart in the Philippines, the system supported only a few Nokia handsets, which limited the take-up of the service.
O2 Ireland commenced a trial in March 2007 with a single high site 1.2 kW transmitter at Three Rock covering the greater Dublin area.
In France, Spain and South Africa, a nationwide service launch was planned for 2008 or 2009; however, the unavailability of UHF frequencies kept delaying service launches.
In Austria, DVB-H was available from the start of UEFA Euro 2008 as a result of a joint effort between Media Broad and the mobile operators 3 and Orange. The service was switched off at the end of 2010.
In Morocco, the service was launched in May 2008.
In Switzerland, DVB-H has been available since the start of UEFA Euro 2008, thanks to Swisscom.
In Germany, the future of DVB-H remained unclear due to continuing issues with the license and open questions about the business model, in particular what role operators would play in it and whether they were willing to do so.
In China, two companies were issued licenses by the government: Shanghai Media Group and China Central Television. Trials were underway, and services were expected to be launched before the 2008 Beijing Olympics. However, China Multimedia Mobile Broadcasting (CMMB) was the most widely deployed standard in the country in 2008.
In Malaysia, U Mobile, the fourth telecom operator and the country's newest 3G service provider, announced commercial availability of a mobile broadcast TV service based on DVB-H technology before the end of 2007. The service was to be called Mobile LiveTV.
Kenya has a DVB-H service, DStv Mobile, which was launched in Nairobi by the South African company Digital Mobile TV. Consumers receive a package of ten DStv channels on their mobile phones at a cost of Sh1,000 per month. The channels include SuperSport Update, SuperSport 2, SuperSport 3, CNN International, Big Brother Africa and Africa Magic.
In Iran, DVB-H services began in Tehran in March 2008. The service brings ten television and four radio channels to mobile phones.
In Estonia, a DVB-H service started its testing phase in April 2008 with Levira and EMT, offering up to 15 TV stations. The testing period ended in November 2009; the service never went into commercial use.
In South Africa Multichoice launched the public version of its DVB-H service, called DSTV Mobile on 1 December 2010. During September 2018, MultiChoice announced that its DStv Mobile service will end on 31 October 2018.
In the Netherlands, KPN launched a DVB-H service on 29 May 2008. The service offers 10 channels, 1 channel specially made for the service and 9 channels which are converted regular broadcasts. On 30 March 2011, KPN announced it was terminating the DVB-H service on 1 June 2011 because of a lack of new mobiles supporting the standard which resulted in fewer users. KPN still believes in mobile video, but in the form of video-on-demand using mobile internet connections.
In Jamaica, DVB-H service was launched by telecoms provider LIME in 2010 in the capital city Kingston and its Metropolitan Area. Further upgrades were made and the service was made available in the second city, Montego Bay.
Lack of acceptance
At the NAB show in April 2012, Peter Siebert of Europe's DVB Project Office said DVB-H did not succeed because so few devices were available, mainly because content producers would not subsidize them.
The competing technologies, like MediaFLO in the US, did not have any success either; new MediaFLO sales were terminated in December 2010 and the allocated spectrum was sold to AT&T.
These first-generation transmission protocols are not very spectrum-efficient, but the fatal problem seems to have been the lack of a business model for mobile TV. Very few users were willing to pay for the services. Even with a free mobile-TV signal, daily use was often just a few minutes.
The Dutch operator KPN is an example. KPN cited a lack of receiving devices, but even when KPN's service was initially free, the average daily use was a mere 9 minutes per phone with a DVB-H receiver. (Conventional TV is typically watched 2–3 hours per day.)
As 4G/LTE became standard in most smartphones and was deployed in many countries, it could provide the capacity needed for mobile TV within most people's data allotments. 4G/LTE also supports efficient multicast, which can bring its spectral efficiency in broadcast-type applications to within roughly a factor of two of the newest broadcast protocols (e.g., DVB-T2/T2-Lite).
In late 2016, Phil Laven, the DVB Group's outgoing chairman, acknowledged within its official publication "DVB Scene" that DVB-H and DVB-SH had been a commercial failure and put this down to the reluctance of network operators to include the technology in subsidized handsets.
See also
ATSC-M/H U.S. mobile/handheld standard
Digital Audio Broadcasting (DAB and DAB+)
Digital Multimedia Broadcasting (DMB)
Digital Radio Mondiale (DRM)
Electronic program guide
ETSI Satellite Digital Radio (SDR)
E-VSB ATSC standard
Handheld projector
Integrated Services Digital Broadcasting (ISDB)
IP over DVB
DVB over IP
MediaFLO
Mobile DTV Alliance industry association
Mobile TV a term for the entire category
Multimedia Broadcast Multicast Service (MBMS)
OFDM system comparison table - providing technical details
OneSeg
Spectral efficiency comparison table
WiMAX
Digital Video Broadcasting(DVB)
References
External links
DVB-H.org - Official DVB-H website of the DVB Project, includes extensive information on trials, technical specifications for download, a detailed FAQ, and an indication of DVB-H related products
TR-47 Terrestrial and Non Mobile Multimedia Multicast - TIA Terrestrial and Non Terrestrial Mobile Multimedia Multicast Standards based on DVB-H Technology
DVB Standards & BlueBooks
EBU Tech 3327 - Network Aspects of DVB-H and T-DMB
A 10-page article "DVB-H — the emerging standard for mobile data communication" from the European Broadcasting Union (EBU) Technical Review
A collection of articles on DVB (including DVB-H & DVB-SH) in the archive of EBU Technical Review
DVB-H systems | VECTOR DVB-H systems in use
"Mobile DTV Alliance: Digital Video Broadcast for Handheld Devices; North American Implementation Guidelines Release 1.0" - DVB-H Implementation Guidelines for North America
DVB-T2-Lite: a new option for mobile broadcasting December 2011
DVB-NGH: introduction
Broadcast engineering
Digital Video Broadcasting
ETSI
Mobile telephone broadcasting
Mobile television
Television transmission standards | DVB-H | Technology,Engineering | 3,229 |
2,150,535 | https://en.wikipedia.org/wiki/PGPfone | PGPfone was a secure voice telephony system developed by Philip Zimmermann in 1995. The PGPfone protocol had little in common with Zimmermann's popular PGP email encryption package, except for the use of the name. It used the ephemeral Diffie-Hellman protocol to establish a session key, which was then used to encrypt the stream of voice packets. The two parties compared a short authentication string to detect a man-in-the-middle attack, the most common method of wiretapping secure phones of this type. PGPfone could be used point-to-point (with two modems) over the public switched telephone network, or over the Internet as an early Voice over IP system.
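The sketch below (Python) illustrates the short-authentication-string idea only; it is not PGPfone's actual wire format, and the word list and secret here are placeholders. Both parties hash the shared Diffie-Hellman secret and read a few resulting words aloud; an attacker in the middle holds a different secret with each side, so the strings would not match.

import hashlib

def short_auth_string(shared_secret: bytes, words, n_words=4):
    # Hash the shared secret and map successive digest bytes onto a word
    # list (PGP's word list plays this role in practice; any list works
    # for illustration).
    digest = hashlib.sha256(shared_secret).digest()
    return " ".join(words[b % len(words)] for b in digest[:n_words])

wordlist = ["adroitness", "adviser", "aftermath", "aggregate", "alkali", "almighty"]
print(short_auth_string(b"example-dh-secret", wordlist))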
In 1996, there were no protocol standards for Voice over IP. Ten years later, Zimmermann released the successor to PGPfone, Zfone and ZRTP, a newer and secure VoIP protocol based on modern VoIP standards. Zfone builds on the ideas of PGPfone.
According to the MIT PGPfone web page, "MIT is no longer distributing PGPfone. Given that the software has not been maintained since 1997, we doubt it would run on most modern systems."
See also
Comparison of VoIP software
Nautilus (secure telephone)
PGP word list
Secure telephone
References
External links
PGPfone homepage on PGPi
Old PGPfone homepage on MIT
PGPfone sources, modified to build on modern systems
Discontinued software
Secure communication
Cryptographic software
VoIP software | PGPfone | Mathematics | 317 |
70,573,417 | https://en.wikipedia.org/wiki/Leray%E2%80%93Schauder%20degree | In mathematics, the Leray–Schauder degree is an extension of the degree of a base point preserving continuous map between spheres, or equivalently of boundary-sphere-preserving continuous maps between balls, to boundary-sphere-preserving maps between balls in a Banach space B, assuming that the map is of the form f = I − C, where I is the identity map and C is some compact map (i.e. mapping bounded sets to sets whose closure is compact).
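One standard construction, given here only as a sketch under the usual assumptions: for an open bounded set \Omega \subset B and a point y \notin (I - C)(\partial\Omega), approximate the compact map C uniformly on \overline{\Omega} by a finite-rank map C_\varepsilon and reduce to the Brouwer degree on a finite-dimensional subspace:

\deg_{LS}(I - C,\ \Omega,\ y) \;:=\; \deg_{B}\bigl((I - C_\varepsilon)|_{\overline{\Omega} \cap E_\varepsilon},\ \Omega \cap E_\varepsilon,\ y\bigr),

where E_\varepsilon is a finite-dimensional subspace containing y and the range of C_\varepsilon, and \lVert C - C_\varepsilon \rVert < \operatorname{dist}\bigl(y, (I - C)(\partial\Omega)\bigr) on \overline{\Omega}. One checks that the right-hand side is independent of the choices made, which makes the definition sound.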
The degree was invented by Jean Leray and Juliusz Schauder to prove existence results for partial differential equations.
References
Topology | Leray–Schauder degree | Physics,Mathematics | 115 |
58,071,400 | https://en.wikipedia.org/wiki/Marlin%20%28firmware%29 | Marlin is open source firmware originally designed for RepRap project FDM (fused deposition modeling) 3D printers using the Arduino platform.
Marlin supports many different types of 3D printing robot platforms, including basic Cartesian, Core XY, Delta, and SCARA printers, as well as some other less conventional designs like Hangprinter and Beltprinter. In addition to 3D printers, Marlin is generally adaptable to any machine requiring control and interaction. It has been used to drive SLA and SLS 3D printers, custom CNC mills, laser engravers (or laser beam machining), laser cutters, vinyl cutters, pick-and-place machines, foam cutters, and egg painting robots.
History
Marlin was first created in 2011 for the RepRap and Ultimaker printers by combining elements from the open source Grbl and Sprinter projects. Development continued at a slow pace while gaining in popularity and acceptance as a superior alternative to the other available firmware. By 2015, companies were beginning to introduce commercial 3D printers with Marlin pre-installed and contributing their improvements to the project. Early machines included the Ultimaker 1, the TAZ series by Aleph Objects and the Prusa i3 by Prusa Research.
By 2018 manufacturers had begun to favor boards with more powerful and efficient ARM processors, often at a lower cost than the AVR boards they supplant. After extensive refactoring Marlin 2.0 was officially released in late 2019 with full support for 32-bit ARM-based controller boards through a lightweight extensible hardware access layer. While Marlin 1.x had only supported 8-bit AVR (e.g., ATMega) and ATSAM3X8E (Due) platforms, the HAL added ATSAMD51 (Grand Central), Espressif ESP32, NXP LPC176x, and STMicro STM32. Marlin also acquired HAL code to run natively on Linux, Mac, and Windows, but only within a simulation for debugging purposes.
As of October 2022, Marlin was still under active development and remained very popular, claiming to be "the most widely used 3D printing firmware in the world." Some of the most successful companies using Marlin today are Ultimaker, LulzBot, Prusa Research, and Creality.
Marlin firmware is not alone in the field of open source 3D printer firmware. Other popular open source firmware offerings include RepRap Firmware by Duet3D, Buddy Firmware by Prusa Research, and Klipper by the Klipper Foundation. These alternatives take advantage of extra processing power to offer advanced features like input shaping, which has only recently been added to Marlin in a limited form (Marlin does not support hardware accelerometers, which are the best way to take full advantage of input shaping).
Technical
Marlin firmware is hosted on GitHub, where it is developed and maintained by a community of contributors. Marlin's lead developer is Scott Lahteine (aka Thinkyhead), an independent shareware developer and former Amiga game developer who joined the project in 2014. His work is supported entirely by crowdfunding.
Marlin is written in optimized C++ for the Arduino API in a mostly embedded-C++ style, which avoids the use of dynamic memory allocation. The firmware can be built with the Arduino IDE, PlatformIO, or the Auto Build Marlin extension for Visual Studio Code. The last method is recommended because it is very easy, but since Auto Build Marlin is a Visual Studio Code extension, Visual Studio Code must be installed on the build system first.
Once the firmware has been compiled from C++ source code, it is installed and runs on a mainboard with onboard components and general-purpose I/O pins to control and communicate with other components. For control, the firmware receives input from a USB port or attached media in the form of G-code commands instructing the machine what to do. For example, the command "G1 X10" tells the machine to perform a smooth linear move of the X axis to position 10. The main loop manages all of the machine's real-time activities, such as commanding the stepper motors through stepper drivers; controlling heaters, sensors, and lights; and managing the display and user interface.
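As a sketch of the host side of this exchange, the Python snippet below streams a few G-code commands to a Marlin board over USB serial and waits for Marlin's "ok" acknowledgement after each line. It assumes the pyserial package; the port name, baud rate, and reset delay are assumptions that vary by board.

import serial  # pyserial
import time

with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as ser:
    time.sleep(2)  # most Marlin boards reset when the port opens
    for cmd in ("G28", "G1 X10 F1500", "M114"):  # home, move X, report position
        ser.write((cmd + "\n").encode("ascii"))
        while True:  # read lines until Marlin acknowledges (or times out)
            line = ser.readline().decode("ascii", errors="replace").strip()
            if line.startswith("ok") or not line:
                break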
License
Marlin is distributed under the GPL license which requires that organizations and individuals share their source code if they distribute the firmware in binary form, including firmware that comes pre-installed on the mainboard. Vendors have occasionally failed to comply with the license, leading to some distributors dropping their products.
In 2018 the US distributor Printed Solid ended its relationship with Creality due to GPL violations and quality issues.
As of 2022, some vendors are still spotty in their compliance, deflecting customer requests for the source code for an extended period or in perpetuity after a product release.
Usage and license compliance
Marlin firmware is used by several 3D printer manufacturers, most of which are fully compliant with the license. Compliance is tracked by Tim Hoogland of TH3D Studio, et al.
See also
RepRap Project
G-code
RAMPS
3D printing
Applications of 3D printing
List of 3D printer manufacturers
List of 3D printing software
Comparison of 3D printers
3D printing processes
3D Manufacturing Format
3D printing speed
Fused filament fabrication
Construction 3D printing
References
External links
Marlin official website
Marlin GitHub repository
Marlin Patreon page
How it's Made: The Marlin Firmware!, an interview with Scott Lahteine, YouTube
3D printing
Firmware | Marlin (firmware) | Engineering,Biology | 1,183 |
27,942,602 | https://en.wikipedia.org/wiki/Camelopardalis%20in%20Chinese%20astronomy | According to traditional Chinese uranography, the modern constellation Camelopardalis is located within the Three Enclosures (三垣, Sān Yuán).
The name of the western constellation in modern Chinese is 鹿豹座 (lù bào zuò), meaning "the leopard-deer constellation".
Stars
The map of the Chinese constellations in the area of the constellation Camelopardalis consists of:
See also
Traditional Chinese star names
Chinese constellations
References
External links
Camelopardalis – Chinese associations
Hong Kong Space Museum research resources (香港太空館研究資源)
English translations of Chinese star zones, asterisms and star names (中國星區、星官及星名英譯表)
Celestial phenomena in literature (天象文學)
Astronomy education information network of the National Museum of Natural Science, Taiwan (台灣自然科學博物館天文教育資訊網)
Ancient Chinese astronomy (中國古天文)
The star system of ancient China (中國古代的星象系統)
Astronomy in China
Camelopardalis | Camelopardalis in Chinese astronomy | Astronomy | 157 |
49,338,480 | https://en.wikipedia.org/wiki/Apache%20SystemDS | Apache SystemDS (Previously, Apache SystemML) is an open source ML system for the end-to-end data science lifecycle.
SystemDS's distinguishing characteristics are:
Algorithm customizability via R-like and Python-like languages.
Multiple execution modes, including Standalone, Spark Batch, Spark MLContext, Hadoop Batch, and JMLC.
Automatic optimization based on data and cluster characteristics to ensure both efficiency and scalability.
History
SystemML was created in 2010 by researchers at the IBM Almaden Research Center led by IBM Fellow Shivakumar Vaithyanathan. It was observed that data scientists would write machine learning algorithms in languages such as R and Python for small data. When it came time to scale to big data, a systems programmer would be needed to scale the algorithm in a language such as Scala. This process typically involved days or weeks per iteration, and errors would occur translating the algorithms to operate on big data. SystemML seeks to simplify this process. A primary goal of SystemML is to automatically scale an algorithm written in an R-like or Python-like language to operate on big data, generating the same answer without the error-prone, multi-iterative translation approach.
On June 15, 2015, at the Spark Summit in San Francisco, Beth Smith, General Manager of IBM Analytics, announced that IBM was open-sourcing SystemML as part of IBM's major commitment to Apache Spark and Spark-related projects. SystemML became publicly available on GitHub on August 27, 2015 and became an Apache Incubator project on November 2, 2015. On May 17, 2017, the Apache Software Foundation Board approved the graduation of Apache SystemML as an Apache Top Level Project.
Key technologies
The following are some of the technologies built into the SystemDS engine.
Compressed Linear Algebra for Large Scale Machine Learning
Declarative Machine Learning Language
Examples
Principal Component Analysis
The following code snippet performs a principal component analysis of the input matrix A, returning the eigenvectors and the eigenvalues.
# PCA.dml
# Refer: https://github.com/apache/systemds/blob/master/scripts/algorithms/PCA.dml#L61
N = nrow(A);
D = ncol(A);
# perform z-scoring (centering and scaling)
A = scale(A, center==1, scale==1);
# co-variance matrix
mu = colSums(A)/N;
C = (t(A) %*% A)/(N-1) - (N/(N-1))*t(mu) %*% mu;
# compute eigen vectors and values
[evalues, evectors] = eigen(C);
Invocation script
spark-submit SystemDS.jar -f PCA.dml -nvargs INPUT=INPUT_DIR/pca-1000x1000 \
OUTPUT=OUTPUT_DIR/pca-1000x1000-model PROJDATA=1 CENTER=1 SCALE=1
Database functions
DBSCAN clustering algorithm with Euclidean distance:
X = rand(rows=1780, cols=180, min=1, max=20)
[indices, model] = dbscan(X = X, eps = 2.5, minPts = 360)
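SystemDS also ships Python bindings (listed among the 2.0 improvements below). A minimal sketch, assuming the systemds Python package and a working SystemDS installation:

import numpy as np
from systemds.context import SystemDSContext

with SystemDSContext() as sds:
    X = sds.from_numpy(np.random.rand(100, 10))  # transfer a NumPy matrix
    Y = (X * 3.0) + 1.0                          # operations build a lazy DAG
    print(Y.sum().compute())                     # .compute() triggers execution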
Improvements
SystemDS 2.0.0 is the first major release under the new name. This release contains a major refactoring, a few major features, a large number of improvements and fixes, and some experimental features to better support the end-to-end data science lifecycle. In addition, this release also removes several outdated features.
New mechanism for DML-bodied (script-level) builtin functions, and a wealth of new built-in functions for data preprocessing including data cleaning, augmentation and feature engineering techniques, new ML algorithms, and model debugging.
Several methods for data cleaning have been implemented including multiple imputations with multivariate imputation by chained equations (MICE) and other techniques, SMOTE, an oversampling technique for class imbalance, forward and backward NA filling, cleaning using schema and length information, support for outlier detection using standard deviation and inter-quartile range, and functional dependency discovery.
A complete framework for lineage tracing and reuse including support for loop deduplication, full and partial reuse, compiler assisted reuse, several new rewrites to facilitate reuse.
New federated runtime backend including support for federated matrices and frames, federated builtins (transform-encode, decode etc.).
Refactor compression package and add functionalities including quantization for lossy compression, binary cell operations, left matrix multiplication. [experimental]
New python bindings with supports for several builtins, matrix operations, federated tensors and lineage traces.
CUDA implementation of cumulative aggregate operators (cumsum, cumprod, etc.)
New model debugging technique with slice finder.
New tensor data model (basic tensors of different value types, data tensors with schema) [experimental]
Cloud deployment scripts for AWS and scripts to set up and start federated operations.
Performance improvements with parallel sort, gpu cum agg, append cbind etc.
Various compiler and runtime improvements including new and improved rewrites, reduced Spark context creation, new eval framework, list operations, updated native kernel libraries to name a few.
New data reader/writer for json frames and support for sql as a data source.
Miscellaneous improvements: improved documentation, better testing, run/release scripts, improved packaging, Docker container for systemds, support for lambda expressions, bug fixes.
Removed MapReduce compiler and runtime backend, pydml parser, Java-UDF framework, script-level debugger.
Deprecated ./scripts/algorithms, as those algorithms gradually will be part of SystemDS builtins.
Contributions
Apache SystemDS welcomes contributions in code, question and answer, community building, or spreading the word. The contributor guide is available at https://github.com/apache/systemds/blob/main/CONTRIBUTING.md
See also
Comparison of deep learning software
References
External links
Apache SystemML website
IBM Research - SystemML
Q & A with Shiv Vaithyanathan, Creator of SystemML and IBM Fellow
A Universal Translator for Big Data and Machine Learning
SystemML: Declarative Machine Learning at Scale presentation by Fred Reiss
SystemML: Declarative Machine Learning on MapReduce
Hybrid Parallelization Strategies for Large-Scale Machine Learning in SystemML
SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs
IBM's SystemML machine learning system becomes Apache Incubator project
IBM donates machine learning tech to Apache Spark open source community
IBM's SystemML Moves Forward as Apache Incubator Project
Cluster computing
Data mining and machine learning software
Hadoop
SystemML
Software using the Apache license
Java platform
Big data products
2015 software | Apache SystemDS | Technology | 1,458 |
54,632,469 | https://en.wikipedia.org/wiki/Halorubrum%20lacusprofundi | Halorubrum lacusprofundi is a rod-shaped, halophilic archaeon in the family Halorubraceae. It was first isolated from Deep Lake in Antarctica in the 1980s.
Genome
Several strains of H. lacusprofundi have been discovered. The genome sequencing of the strain ACAM 32 was completed in 2008. The organism's genome consists of two circular chromosomes and a single circular plasmid. Chromosome I contains 2,735,295 base pairs encoding 2,801 genes and chromosome II contains 525,943 base pairs encoding 522 genes. The single plasmid contains 431,338 base pairs encoding 402 genes. At least one strain of H. lacusprofundi (R1S1) contains a plasmid (pR1SE) that enables horizontal gene transfer, which takes place via a mechanism that uses vesicle-enclosed virus-like particles.
Research
Its β-galactosidase enzyme has been extensively studied to understand how proteins function in low-temperature, high-saline environments.
References
Euryarchaeota | Halorubrum lacusprofundi | Biology | 230 |
3,285,287 | https://en.wikipedia.org/wiki/Bushy-tailed%20opossum | The bushy-tailed opossum (Glironia venusta) is an opossum from South America. It was first described by English zoologist Oldfield Thomas in 1912. It is a medium-sized opossum characterized by large, oval, dark ears, a fawn to cinnamon coat with a buff to gray underside, grayish limbs, and a furry tail. Little is known of the behavior of the bushy-tailed opossum; fewer than 25 specimens are known. It appears to be arboreal (tree-living), nocturnal (active mainly at night) and solitary. The diet probably comprises insects, eggs and plant material. This opossum has been captured in heavy, humid, tropical forests; it has been reported from Bolivia, Brazil, Colombia, Ecuador and Peru. The IUCN classifies it as least concern.
Taxonomy and etymology
The bushy-tailed opossum is the sole member of Glironia, and is placed in the family Didelphidae. It was first described by English zoologist Oldfield Thomas in 1912. Earlier, Glironia was considered part of the subfamily Didelphinae. A 1955 revision of marsupial phylogeny grouped Caluromys, Caluromysiops, Dromiciops (monito del monte) and Glironia under a single subfamily, Microbiotheriinae, noting the dental similarities among these. A 1977 study argued that these similarities are the result of convergent evolution, and placed Caluromys, Caluromysiops and Glironia in a new subfamily, Caluromyinae. In another similar revision, the bushy-tailed opossum was placed in its own subfamily, Glironiinae.
The cladogram below, based on a 2016 study, shows the phylogenetic relationships of the bushy-tailed opossum.
The generic name is a compound of the Latin glis, gliris ("dormouse") and the Greek suffix -ia (pertaining to "quality" or "condition"). The specific name, venusta, means "charming" in Latin.
Description
The bushy-tailed opossum is a medium-sized opossum characterized by large, oval, dark ears, a fawn to cinnamon coat with a buff to gray underside, grayish limbs, and, as its name suggests, a furry tail. The face is marked by two bold, dark stripes extending from either side of the nose through the eyes to the back of the ears. These stripes are separated by a thinner grayish white band that runs from the midline of the nose to the nape of the neck. The texture of the hairs ranges from soft to woolly; the hairs on the back measure . Five nipples can be seen on the abdomen; it lacks a marsupium. The tail, long, becomes darker and less bushy towards the tip. Basically the same in color as the coat, the tip may be completely white or have diffuse white hairs.
The head-and-body length is typically between , the hindfeet measure and the ears are long. It weighs nearly . The dental formula is 5.1.3.4/4.1.3.4 – typical of all didelphids. Canines and molars are poorly developed. Differences from Marmosa species (mouse opossums) include smaller ears, a longer and narrower rostrum, and more erect canines. The monito del monte has a similar bushy tail. A study of the male reproductive system noted that the bushy-tailed opossum has two pairs of bulbourethral glands, as in Caluromys and Gracilinanus, but unlike other didelphids, which have three pairs. The urethral grooves of the glans penis end near the tips.
Ecology and behavior
Little is known of the behavior of the bushy-tailed opossum. Less than 25 specimens are known. A study noted the morphological features of the opossum that could allow for powerful movements during locomotion, and deduced that it is arboreal (tree-living). It appears to be solitary and nocturnal (active mainly at night). An individual was observed running through and leaping over vines, in a manner typical of opossums, probably hunting for insects. Its diet may be similar to that of the mouse opossums – insects, eggs and plant material.
Distribution and status
The bushy-tailed opossum has been captured from heavy, humid, tropical forests, and has not been recorded outside forests. It occurs up to an altitude of above sea level. The range has not been precisely determined; specimens have been collected from regions of Bolivia, Brazil, Colombia, Ecuador and Peru. The IUCN classifies the bushy-tailed opossum as least concern, given its wide distribution and presumably large population. The major threats to its survival are deforestation and human settlement.
References
Further reading
Opossums
Marsupials of Bolivia
Marsupials of Brazil
Marsupials of Colombia
Marsupials of Ecuador
Marsupials of Peru
Mammals described in 1912
EDGE species
Taxa named by Oldfield Thomas | Bushy-tailed opossum | Biology | 1,039 |
74,284,664 | https://en.wikipedia.org/wiki/9855 | 9855 (nine thousand eight hundred fifty-five) is an odd, composite, four-digit number. The number 9855 is the magic constant of an n × n normal magic square, as well as of the n-queens problem, for n = 27. It can be expressed as the product of its prime factors: 9855 = 3^3 × 5 × 73.
9855 is also the magic constant of a magic square of order 27. In a magic square, the magic constant is the common sum of the numbers in each row, column, and diagonal. For magic squares of order n, the magic constant is given by the formula M = n(n^2 + 1)/2.
The magic constant 9855 for the magic square of order 27 can be calculated as follows: M = 27 × (27^2 + 1)/2 = 27 × 730/2 = 27 × 365 = 9855.
This square contains the numbers 1 to 729, with 365 in the center. The square consists of nine order-9 magic squares. It has been noted that the number of days in 27 years (365 days per year) is 9855, the constant of the larger square. This was first discovered and solved by the ancient Greeks: Aristotle understood this magic square, but, as the remark numeris Platonicis nihil obscurius ("nothing is more obscure than Platonic numbers") records, Cicero was unable to solve it. The 27 years alluded to by the square were mentioned in reference to Greek generation time.
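A short verification sketch in Python, using the classic Siamese construction for odd-order magic squares (one of several ways to build an order-27 square):

def siamese_magic_square(n):  # n must be odd
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2  # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n  # move up and to the right
        if square[nr][nc]:                 # if occupied, drop down one row
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

m = siamese_magic_square(27)
print(sum(m[0]))              # 9855
print(27 * (27**2 + 1) // 2)  # 9855, from the magic-constant formula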
References
Integers | 9855 | Mathematics | 256 |
52,279,775 | https://en.wikipedia.org/wiki/Dependent%20component%20analysis | Dependent component analysis (DCA) is a blind signal separation (BSS) method and an extension of independent component analysis (ICA). ICA separates mixed signals into individual signals without knowing anything about the source signals. DCA separates mixed signals into individual sets of signals that are dependent on signals within their own set, without knowing anything about the original signals. DCA reduces to ICA when each set of signals contains only a single signal.
Mathematical representation
For simplicity, assume all individual sets of signals are the same size, k, with N sets in total. Building off the basic equations of BSS (seen below), instead of independent source signals one has independent sets of signals, s(t) = ({s1(t),...,sk(t)},...,{skN-k+1(t),...,skN(t)})T, which are mixed by coefficients A = [aij] ∈ Rm×kN to produce a set of mixed signals, x(t) = (x1(t),...,xm(t))T, so that x(t) = A s(t). The signals can be multidimensional.
The following BSS equation separates the set of mixed signals, x(t), by finding and using coefficients B = [bij] ∈ RkN×m to obtain the set of approximations of the original signals: y(t) = ({y1(t),...,yk(t)},...,{ykN-k+1(t),...,ykN(t)})T = B x(t).
Methods
Sub-Band Decomposition ICA (SDICA) is based on the fact that wideband source signals may be dependent overall yet independent in some subbands. It separates mixed signals with an adaptive filter, choosing the subbands that minimize mutual information (MI). After the subband signals are found, ICA can be used to reconstruct the sources based on them. Below is a formula for MI based on entropy, where H denotes entropy:

MI(y1, ..., ykN) = H(y1) + ... + H(ykN) − H(y1, ..., ykN)
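An illustrative sketch of the subband idea (not the full SDICA algorithm), assuming scipy and scikit-learn are available; the band edges, example sources, and mixing matrix are arbitrary choices for demonstration:

import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

def sdica_like(x, fs, band=(8.0, 12.0)):
    # x: (n_samples, n_mixtures). Band-pass into a subband where the
    # sources are assumed independent, learn the unmixing there, then
    # apply it to the original wideband mixtures.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x_sub = sosfiltfilt(sos, x, axis=0)
    ica = FastICA(n_components=x.shape[1], random_state=0)
    ica.fit(x_sub)
    return x @ ica.components_.T  # sources recovered up to scale and order

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
s = np.c_[np.sin(2 * np.pi * 10 * t), np.sign(np.sin(2 * np.pi * 9 * t))]
x = s @ np.array([[1.0, 0.6], [0.4, 1.0]])  # two mixed sources
y = sdica_like(x, fs)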
References
Signal processing | Dependent component analysis | Technology,Engineering | 459 |
28,066,214 | https://en.wikipedia.org/wiki/Licensed%20behavior%20analyst | A licensed behavior analyst is a type of behavioral health professional in the United States. They have at least a master's degree, and sometimes a doctorate, in behavior analysis or a related field. Behavior analysts apply radical behaviorism, or applied behavior analysis, to people.
Defining the scope of practice
The Behavior Analyst Certification Board (BACB) defines behavior analysis as follows:
The experimental analysis of behavior (EAB) is the basic science of this field and has over many decades accumulated a substantial and well-respected research literature. This literature provides the scientific foundation for applied behavior analysis (ABA), which is both an applied science that develops methods of changing behavior and a profession that provides services to meet diverse behavioral needs. Briefly, professionals in applied behavior analysis engage in the specific and comprehensive use of principles of learning, including operant and respondent learning, in order to address the behavioral needs of widely varying individuals in diverse settings. Examples of these applications include: building the skills and achievements of children in school settings; enhancing the development, abilities, and choices of children and adults with different kinds of disabilities; and augmenting the performance and satisfaction of employees in organizations and businesses.
As the above suggests, behavior analysis is based on the principles of operant and respondent conditioning. This places behavior analysis among the dominant models of behavior management, behavioral engineering and behavior therapy. Behavior analysis is an active, environment-based approach, and some behavior analytic procedures are considered highly restrictive (see least restrictive environment). For example, these services may make access to preferred items contingent on performance. This has led to abuses in the past, particularly where punishment programs have been involved. In addition, not being an independent profession often leads behavior analysts and other behavior modifiers to have their ethical codes supplanted by those of other professions. For example, a behavior analyst working in a hospital setting might design a token economy, a form of contingency management. He may desire to meet his ethical obligation to make the program habilitative and in the clients' best long-term interest, but the physicians and nurses in the hospital who supervise him may decide that the token economy should instead create order in the nursing routines so clients get their medication quickly and efficiently, contrary to the ethical code of the Behavior Analyst Certification Board and the Association for Behavior Analysis International's position that those receiving treatment have a right to effective treatment and a right to effective education. In addition, failure on the part of a behavior analyst to adequately supervise his or her workers could lead to abuse. Finally, misrepresentations of the field and historical problems between academics have led to frequent calls to professionalize behavior analysis.
In general, there is wide support within the profession for licensure.
Range of populations worked with
The professional practice of behavior analysis ranges from treatment of individuals with autism and developmental disabilities to behavioral coaching and behavioral psychotherapy. In addition to treatment of mental health problems and corrections, the professional practice of behavior analysis includes organizational behavioral management, behavioral safety and even maintaining the behavioral health of astronauts while within and beyond Earth's orbit.
Certification
The Behavior Analyst Certification Board (BACB) and the Qualified Applied Behavior Analysis Credentialing Board (QABA) offer technical certificates in behavior analysis. These certifications are internationally recognized. A certification states the holder's level of training and requires an exam demonstrating a minimum level of competence to call oneself a Board Certified Behavior Analyst (BCBA) or Qualified Behavior Analyst (QBA). Certification came about because of many ethical issues with behavioral interventions, including the use of aversive and humiliating treatments in the name of behavior modification. The American Psychological Association offers a diplomate (a post-Ph.D., licensed certification) in behavioral psychology.
The meaning of certification
The BACB and QABA are private organizations without governmental powers to regulate behavior analytic practice. While BACB and QABA certification means that candidates have satisfied entry-level requirements in behavior analytic training, certificants may require a government license for independent practice when treating behavioral health or medical problems. Licensed certificants must operate within the scope of their license and must practice within their areas of expertise. Where the government regulates behavior analytic services, unlicensed certificants must be supervised by a licensed professional and operate within the scope of their supervisor's license when treating disorders. Unlicensed certificants who provide behavior analytic training for educational or optimal-performance purposes do not require licensed supervision. Where the government does not regulate the treatment of medical or psychological disorders, certificants should practice in accordance with the laws of their state, province, or country. All certificants must practice within their personal areas of expertise.
Licensure
Recently, a move has occurred to license behavior analysts. Licensure's purpose is to protect the public from employing unqualified practitioners.
The model licensing act states that a person is a behavior analyst by training and experience. The person seeking licensure must have mastered behavior analysis by achieving a master's degree in behavior analysis or related subject matter. Like all other master level licensed professions (see counseling and licensed professional counselor) the model act sets the standard for a master's degree. This requirement states that the person has achieved textbook knowledge of behavior analysis which can be then tested through the exam offered by the Behavior Analyst Certification Board or the one offered by the QABA. It also requires an internship in which a behavior analysts works under another master or Ph.D. level behavior analyst for a period of one year (750 hours) with at least two hours/week of supervision. Finally, those 750 hours are considered tutelage time. After that, the behavior analyst must engage in supervised practice under a behavior analyst for a period of another 2 years (2,000 hours).
Once this process is complete, the person applies to a state board, which ensures that he or she has indeed met the above conditions. Once the person is licensed, public protection is still monitored by the licensing board, which makes sure that the person receives sufficient ongoing education and which investigates ethical complaints. A licensed behavior analyst has training, knowledge, skills and abilities in their discipline equal to those of a mental health counselor or marriage and family therapist in theirs. As of February 2008, Indiana, Arizona, Massachusetts, Vermont, Oklahoma and other states had legislation pending to create licensure for behavior analysts. In 2008 Pennsylvania became the first state to license "behavior specialists", a designation covering behavior analysts. Arizona, less than three weeks later, became the first state to license "behavior analysts." Other states such as New York, Nevada and Wisconsin have also passed behavior analytic licensure.
Other countries
Recently licensure efforts have occurred in Canada for behavior analysts.
Professional organizations
The Association for Behavior Analysis International has a special interest group for practitioner issues, which focuses on key issues related to licensing behavior analysts. In addition, it has a practice board and a policy board to handle legislative issues. Finally, the association has recently put out its own model licensing act for behavior analysts.
The Association for Behavior Analysis International serves as the core intellectual home for behavior analysts. It sponsors two conferences per year: one in the U.S. and one international.
See also
Professional practice of behavior analysis
References
Mental health occupations
Mental health in the United States
Applied psychology
Behavior modification
Cognitive behavioral therapy
Behaviorism | Licensed behavior analyst | Biology | 1,539
23,205,672 | https://en.wikipedia.org/wiki/Maslach%20Burnout%20Inventory | The Maslach Burnout Inventory (MBI) is a psychological assessment instrument comprising 22 symptom items pertaining to occupational burnout. The original form of the MBI was developed by Christina Maslach and Susan E. Jackson with the goal of assessing an individual's experience of burnout. As underlined by Schaufeli (2003), a major figure of burnout research, "the MBI is neither grounded in firm clinical observation nor based on sound theorising. Instead, it has been developed inductively by factor-analysing a rather arbitrary set of items" (p. 3). The instrument takes 10 minutes to complete. The MBI measures three dimensions of burnout: emotional exhaustion, depersonalization, and personal accomplishment.
Following the publication of the MBI in 1981, new versions of the MBI were gradually developed to apply to different groups and different settings. There are now five versions of the MBI: Human Services Survey (MBI-HSS), Human Services Survey for Medical Personnel (MBI-HSS (MP)), Educators Survey (MBI-ES), General Survey (MBI-GS), and General Survey for Students (MBI-GS [S]).
The psychometric properties of the MBI have proved somewhat problematic (e.g., in terms of factorial validity and measurement invariance), casting doubt on the conceptual coherence and syndromal cohesiveness of burnout. Two meta-analyses of primary studies that report sample-specific reliability estimates for the three MBI scales found that the emotional exhaustion scale has adequate reliability; however, reliability is problematic for the depersonalization and personal accomplishment scales. Research based on the job demands-resources (JD-R) model indicates that emotional exhaustion, the core of burnout, is directly related to demands and inversely related to the extensiveness of resources. The MBI has been validated for human services populations, educator populations, and general work populations.
The MBI is often combined with the Areas of Worklife Survey (AWS) to assess levels of burnout and worklife context.
Uses of the Maslach Burnout Inventory
Assess professional burnout in human service, education, business, and government professions.
Assess and validate the three-dimensional structure of burnout.
Understand the nature of burnout for developing effective interventions.
Maslach Burnout Inventory Scales
Emotional Exhaustion (EE)
The 9-item Emotional Exhaustion (EE) scale measures feelings of being emotionally overextended and exhausted by one's work. Higher scores correspond to greater experienced burnout. This scale is used in the MBI-HSS, MBI-HSS (MP), and MBI-ES versions.
The MBI-GS and MBI-GS (S) use a shorter 5-item version of this scale called "Exhaustion".
Depersonalization (DP)
The 5-item Depersonalization (DP) scale measures an unfeeling and impersonal response toward recipients of one's service, care, treatment, or instruction. Higher scores indicate higher degrees of experienced burnout. This scale is used in the MBI-HSS, MBI-HSS (MP) and the MBI-ES versions.
Personal Accomplishment (PA)
The 8-item Personal Accomplishment (PA) scale measures feelings of competence and successful achievement in one's work. Lower scores correspond to greater experienced burnout. This scale is used in the MBI-HSS, MBI-HSS (MP), and MBI-ES versions.
Cynicism
The 5-item Cynicism scale measures an indifferent or distant attitude towards one's work. It is akin to the Depersonalization scale. The cynicism measured by this scale is a coping mechanism for distancing oneself from exhausting job demands. Higher scores correspond to greater experienced burnout. This scale is used in the MBI-GS and MBI-GS (S) versions.
Professional Efficacy
The 6-item Professional Efficacy scale measures feelings of competence and successful achievement in one's work. It is akin to the Personal Accomplishment scale. This sense of personal accomplishment emphasizes effectiveness and success in having a beneficial impact on people. Lower scores correspond to greater experienced burnout. This scale is used in the MBI-GS and MBI-GS (S) versions.
Forms of the Maslach Burnout Inventory
The MBI has five validated forms composed of 16–22 items to measure an individual's experience of burnout.
Maslach Burnout Inventory - Human Services Survey (MBI-HSS)
The MBI-HSS consists of 22 items and is the original and most widely used version of the MBI. It was designed for professionals in human services and is appropriate for respondents working in a diverse array of occupations, including nurses, physicians, health aides, social workers, health counselors, therapists, police, correctional officers, clergy, and other fields focused on helping people live better lives by offering guidance, preventing harm, and ameliorating physical, emotional, or cognitive problems. The MBI-HSS scales are Emotional Exhaustion, Depersonalization, and Personal Accomplishment.
Maslach Burnout Inventory - Human Services Survey for Medical Personnel (MBI-HSS (MP))
The MBI-HSS (MP) is a variation of the MBI-HSS adapted for medical personnel. The most notable alteration is this form refers to "patients" instead of "recipients". The MBI-HSS (MP) scales are Emotional Exhaustion, Depersonalization, and Personal Accomplishment.
Maslach Burnout Inventory - Educators Survey (MBI-ES)
The MBI-ES consists of 22 items and is a version of the original MBI for use with educators. It was designed for teachers, administrators, other staff members, and volunteers working in any educational setting. This form was formerly known as MBI-Form Ed. The MBI-ES scales are Emotional Exhaustion, Depersonalization, and Personal Accomplishment.
Maslach Burnout Inventory - General Survey (MBI-GS)
The MBI-GS consists of 16 items and is designed for use with occupational groups other than human services and education, including those working in jobs such as customer service, maintenance, manufacturing, management, and most other professions. The MBI-GS scales are Exhaustion, Cynicism, and Professional Efficacy.
Maslach Burnout Inventory - General Survey for Students (MBI-GS (S))
The MBI-GS (S) is an adaptation of the MBI-GS designed to assess burnout in college and university students. It is available for use, but its psychometric properties are not yet documented. The MBI-GS (S) scales are Exhaustion, Cynicism, and Professional Efficacy.
Scoring the Maslach Burnout Inventory
All MBI items are scored using a 7-level frequency rating from "never" to "every day." The MBI has three component scales: emotional exhaustion (9 items), depersonalization (5 items) and personal accomplishment (8 items). Each scale measures its own unique dimension of burnout. Scales should not be combined to form a single burnout scale. Importantly, the recommendation to examine the three dimensions of burnout separately implies that, in practice, the MBI is a measure of three independent constructs - emotional exhaustion, depersonalization, and personal accomplishment - rather than a measure of burnout. Maslach, Jackson, and Leiter described item scoring from 0 to 6. Score ranges define low, moderate and high levels of each scale based on the 0-6 scoring; a scoring sketch follows the frequency scale below.
The 7-level frequency scale for all MBI scales is as follows:
Never (0)
A few times a year or less (1)
Once a month or less (2)
A few times a month (3)
Once a week (4)
A few times a week (5)
Every day (6)
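A minimal scoring sketch in Python: sum the 0-6 frequency ratings of the items belonging to each scale. The item-to-scale assignment below is a placeholder, since the real mapping ships with the licensed instrument.

# Placeholder item indices; the true assignment comes with the licensed MBI.
SCALES = {
    "emotional_exhaustion": [0, 1, 2, 5, 7, 12, 13, 15, 19],   # 9 items
    "depersonalization": [4, 9, 10, 14, 21],                   # 5 items
    "personal_accomplishment": [3, 6, 8, 11, 16, 17, 18, 20],  # 8 items
}

def score_mbi(responses):  # responses: 22 ratings, each 0-6
    assert len(responses) == 22 and all(0 <= r <= 6 for r in responses)
    return {scale: sum(responses[i] for i in items)
            for scale, items in SCALES.items()}

print(score_mbi([3] * 22))  # each scale is scored separately, never combined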
Examples of use
The Maslach Burnout Inventory has been used in a variety of studies to study burnout, including with health professionals and teachers. Evidence adduced by Ahola et al. (2014) and Bianchi et al. (2014) suggests that the MBI is measuring a depressive condition.
Notes
References
Psychological tests and scales
Human resource management publications
Occupational stress
Organizational theory
Workplace
Motivation
Organizational behavior | Maslach Burnout Inventory | Biology | 1,737 |
32,751,590 | https://en.wikipedia.org/wiki/Flavocytochrome%20c%20sulfide%20dehydrogenase | Flavocytochrome c sulfide dehydrogenase, also known as Sulfide-cytochrome-c reductase (flavocytochrome c) (), is an enzyme with systematic name hydrogen-sulfide:flavocytochrome c oxidoreductase. It is found in sulfur-oxidising bacteria such as the purple phototrophic bacteria Allochromatium vinosum. This enzyme catalyses the following chemical reaction:
hydrogen sulfide + 2 ferricytochrome c ⇌ sulfur + 2 ferrocytochrome c + 2 H+
These enzymes are heterodimers of a flavoprotein (FccB) and a diheme cytochrome (FccA) that carry out hydrogen sulfide-dependent cytochrome c reduction. The diheme cytochrome folds into two domains, each of which resembles mitochondrial cytochrome c, with the two heme groups bound to the interior of the subunit. The flavoprotein subunit has a glutathione reductase-like fold consisting of a beta(3,4)-alpha(3) core and an alpha+beta sandwich. The active site of the flavoprotein subunit contains a catalytically important disulfide bridge located above the pyrimidine portion of the flavin ring. The flavoprotein contains a C-terminal domain required for binding flavin and for subsequent electron transfer. Electrons are transferred from the flavin to one of the heme groups in the cytochrome. Both FAD and heme C are covalently bound to the protein.
References
External links
EC 1.8.2
Protein domains | Flavocytochrome c sulfide dehydrogenase | Biology | 362 |
3,877,767 | https://en.wikipedia.org/wiki/Describing%20function | In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s, and extended by Ralph Kochenburger is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators.
The method
Consider feedback around a discontinuous (but piecewise continuous) nonlinearity (e.g., an amplifier with saturation, or an element with deadband effects) cascaded with a slow stable linear system. The continuous region in which the feedback is presented to the nonlinearity depends on the amplitude of the output of the linear system. As the linear system's output amplitude decays, the nonlinearity may move into a different continuous region. This switching from one continuous region to another can generate periodic oscillations. The describing function method attempts to predict characteristics of those oscillations (e.g., their fundamental frequency) by assuming that the slow system acts like a low-pass or bandpass filter that concentrates all energy around a single frequency. Even if the output waveform has several modes, the method can still provide intuition about properties like frequency and possibly amplitude; in this case, the describing function method can be thought of as describing the sliding mode of the feedback system.
Using this low-pass assumption, the system response can be described by one of a family of sinusoidal waveforms; in this case the system would be characterized by a sine input describing function (SIDF) giving the system response to an input consisting of a sine wave of amplitude A and frequency ω. This SIDF is a modification of the transfer function used to characterize linear systems. In a quasi-linear system, when the input is a sine wave, the output will be a sine wave of the same frequency but with a scaled amplitude and shifted phase as given by the SIDF. Many systems are approximately quasi-linear in the sense that although the response to a sine wave is not a pure sine wave, most of the energy in the output is indeed at the same frequency as the input. This is because such systems may possess intrinsic low-pass or bandpass characteristics such that harmonics are naturally attenuated, or because external filters are added for this purpose. An important application of the SIDF technique is to estimate the oscillation amplitude in sinusoidal electronic oscillators.
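For a static (memoryless) nonlinearity the SIDF reduces to a function of the amplitude alone, computed as the ratio of the output's first Fourier harmonic to the amplitude of the input sine. The following minimal numerical sketch illustrates the idea; the function names and the unit-saturation example are illustrative choices, not drawn from any particular reference.

```python
import numpy as np

def sidf(nonlinearity, A, n_samples=2048):
    """Estimate the sine-input describing function N(A) of a static
    nonlinearity: the complex ratio of the output's first Fourier
    harmonic to an input sine wave of amplitude A. For a memoryless
    element the frequency drops out entirely."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = nonlinearity(A * np.sin(theta))                 # distorted output waveform
    b1 = (2.0 / n_samples) * np.sum(y * np.sin(theta))  # in-phase fundamental
    a1 = (2.0 / n_samples) * np.sum(y * np.cos(theta))  # quadrature fundamental
    return (b1 + 1j * a1) / A

# Example: unit saturation (slope 1, limits +/-1). For A <= 1 the element
# is linear and N(A) = 1; for larger A the effective gain falls off.
saturation = lambda x: np.clip(x, -1.0, 1.0)
for A in (0.5, 1.0, 2.0, 5.0):
    print(f"A = {A}: N(A) = {sidf(saturation, A):.4f}")
```

Closing the loop around a linear part G, a limit cycle is then predicted where N(A)·G(jω) = −1, i.e. where the Nyquist plot of G intersects the locus of −1/N(A).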
Other types of describing functions that have been used are DFs for level inputs and for Gaussian noise inputs. Although not a complete description of the system, the DFs often suffice to answer specific questions about control and stability. DF methods are best suited to analyzing systems with relatively weak nonlinearities. In addition, the higher-order sinusoidal input describing functions (HOSIDFs) describe the response of a class of nonlinear systems at harmonics of the input frequency of a sinusoidal input. The HOSIDFs are an extension of the SIDF for systems where the nonlinearities are significant in the response.
Caveats
Although the describing function method can produce reasonably accurate results for a wide class of systems, it can fail badly for others. For example, the method can fail if the system emphasizes higher harmonics of the nonlinearity. Such examples have been presented by Tzypkin for bang–bang systems. A fairly similar example is a closed-loop oscillator consisting of a non-inverting Schmitt trigger followed by an inverting integrator that feeds its output back to the Schmitt trigger's input. The output of the Schmitt trigger is a square wave, while that of the integrator (following it) is a triangle wave with peaks coinciding with the transitions of the square wave. Each of these two oscillator stages lags the signal by exactly 90 degrees (relative to its input). If one were to perform DF analysis on this circuit, the triangle wave at the Schmitt trigger's input would be replaced by its fundamental (a sine wave), which, on passing through the trigger, would cause a phase shift of less than 90 degrees (because the sine wave would trigger it sooner than the triangle wave does), so the analysis would fail to predict the oscillation that the real circuit exhibits.
Also, in the case where the conditions for Aizerman's or Kalman conjectures are fulfilled, there are no periodic solutions by describing function method, but counterexamples with hidden periodic attractors are known. Counterexamples to the describing function method can be constructed for discontinuous dynamical systems when a rest segment destroys predicted limit cycles. Therefore, the application of the describing function method requires additional justification.
References
Further reading
N. Krylov and N. Bogolyubov: Introduction to Nonlinear Mechanics, Princeton University Press, 1947
A. Gelb and W. E. Vander Velde: Multiple-Input Describing Functions and Nonlinear System Design, McGraw Hill, 1968.
James K. Roberge, Operational Amplifiers: Theory and Practice, chapter 6: Non-Linear Systems, 1975; free copy courtesy of MIT OpenCourseWare 6.010 (2013); see also (1985) video recording of Roberge's lecture on describing functions
P.W.J.M. Nuij, O.H. Bosgra, M. Steinbuch, Higher Order Sinusoidal Input Describing Functions for the Analysis of Nonlinear Systems with Harmonic Responses, Mechanical Systems and Signal Processing, 20(8), 1883–1904, (2006)
External links
Electrical Engineering Encyclopedia: Describing Functions
Nonlinear control
Hidden oscillation | Describing function | Mathematics | 1,312 |
17,800,165 | https://en.wikipedia.org/wiki/Dendronized%20polymer | In polymer chemistry and materials science, dendronized polymers (British English: dendronised polymers) are linear polymers to every repeat unit of which dendrons are attached. Dendrons are regularly branched, tree-like fragments, and for larger dendrons the polymer backbone is wrapped to give sausage-like, cylindrical molecular objects. Figure 1 shows a cartoon representation with the backbone in red and the dendrons, like cake slices, in green. It also provides a concrete chemical structure showing a polymethylmethacrylate (PMMA) backbone, the methyl group (CH3) of which is replaced by a dendron of the third generation (three consecutive branching points).
Figure 1. Cartoon representation (left) and a concrete example of a third generation dendronized polymer (right). The peripheral amine groups are modified by a substituent X which often is a protection group. Upon deprotection and modification substantial property changes can be achieved. The subscript n denotes the number of repeat units.
Structure and applications
Dendronized polymers can contain several thousands of dendrons in one macromolecule and have a stretched out, anisotropic structure. In this regard they differ from the more or less spherically shaped dendrimers, where a few dendrons are attached to a small, dot-like core resulting in an isotropic structure. Depending on dendron generation, the polymers differ in thickness as the atomic force microscopy image shows (Figure 2). Neutral and charged dendronized polymers are highly soluble in organic solvents and in water, respectively. This is due to their low tendency to entangle. Dendronized polymers have been synthesized with, e.g., polymethylmethacrylate, polystyrene, polyacetylene, polyphenylene, polythiophene, polyfluorene, poly(phenylene vinylene), poly(phenylene acetylene), polysiloxane, polyoxanorbornene, poly(ethylene imine) (PEI) backbones. Molar masses up to 200,000,000 g/mol have been obtained. Dendronized polymers have been investigated for/as bulk structure control, responsivity to external stimuli, single molecule chemistry, templates for nanoparticle formation, catalysis, electro-optical devices, and bio-related applications. Particularly attractive is the use of water-soluble dendronized polymers for the immobilization of enzymes on solid surfaces (inside glass tubes or microfluidic devices) and for the preparation of dendronized polymer-enzyme conjugates.
Synthesis
The two main approaches into this class of polymers are the macromonomer route and the attach-to route. In the former, a monomer which already carries the dendron of final size is polymerized. In the latter, the dendrons are constructed generation by generation directly on an already existing polymer. Figure 4 illustrates the difference for a simple case. The macromonomer route results in shorter chains for higher generations, and the attach-to route is prone to structure imperfections, as an enormous number of chemical reactions has to be performed for each macromolecule; the sketch below illustrates the scale of this problem.
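To see why the attach-to route is defect-prone, consider the combinatorics: even at very high per-coupling yield, the probability that every attachment on every repeat unit succeeds falls exponentially with generation and chain length. The sketch below quantifies this under two stated assumptions: an AB2-type dendron (each attachment point spawns two new branches, so generation g needs 2 + 4 + ... + 2^g couplings per repeat unit) and an illustrative per-coupling yield of 99.9%; neither number is taken from the literature.

```python
# Rough defect statistics for the attach-to route (illustrative numbers).
# Assumes an AB2 dendron: generation g needs 2 + 4 + ... + 2^g couplings
# per repeat unit, each succeeding independently with probability p.

def perfect_fraction(p, generation, repeat_units):
    couplings_per_unit = sum(2 ** k for k in range(1, generation + 1))
    per_unit = p ** couplings_per_unit      # one flawless repeat unit
    whole_chain = per_unit ** repeat_units  # every repeat unit flawless
    return couplings_per_unit, per_unit, whole_chain

for g in (1, 2, 3, 4):
    c, unit, chain = perfect_fraction(p=0.999, generation=g, repeat_units=1000)
    print(f"g={g}: {c:2d} couplings/unit, flawless unit: {unit:.3f}, "
          f"flawless 1000-mer: {chain:.2e}")
```

With these illustrative numbers essentially no 1000-mer is defect-free at generation 3 or above, which is why the macromonomer route, where the dendron is completed and purified before a single polymerization step, trades attainable chain length for structural perfection.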
History
The name “dendronized polymer”, which is now internationally accepted, was coined by Schlüter in 1998. The first report of such a macromolecule, at that time called a “rod-shaped dendrimer”, goes back to a patent by Tomalia in 1987 and was followed by Percec's first mention in the open literature of a polymer with “tapered side chains” in 1992. In 1994 the potential of these polymers as cylindrical nanoobjects was recognized. Many groups worldwide have contributed to this field; their work can be found in review articles.
See also
dendrimer
polymer brush
References
Polymers
Soft matter | Dendronized polymer | Physics,Chemistry,Materials_science | 801 |
415,691 | https://en.wikipedia.org/wiki/Pith | Pith, or medulla, is a tissue in the stems of vascular plants. Pith is composed of soft, spongy parenchyma cells, which in some cases can store starch. In eudicotyledons, pith is located in the center of the stem. In monocotyledons, it extends only into roots. The pith is encircled by a ring of xylem; the xylem, in turn, is encircled by a ring of phloem.
While new pith growth is usually white or pale in color, the tissue commonly darkens to a deeper brown color as it ages. In trees, pith is generally present in young growth, but in the trunk and older branches the pith often gets replaced, in great part, by xylem. In some plants, the pith in the middle of the stem may dry out and disintegrate, resulting in a hollow stem. A few plants, such as walnuts, have distinctive chambered pith with numerous short cavities (see image at middle right). The cells in the peripheral parts of the pith may, in some plants, develop to be different from cells in the rest of the pith. This layer of cells is then called the perimedullary region of the pith. An example of this can be observed in Hedera helix, a species of ivy.
The term pith is also used to refer to the pale, spongy inner layer of the rind, more properly called mesocarp or albedo, of citrus fruits (such as oranges) and other hesperidia. The word comes from the Old English word piþa, meaning substance, akin to Middle Dutch pitte (modern Dutch pit), meaning the pit of a fruit.
Uses
Food
The pith of the sago palm, although highly toxic to animals in its raw form, is an important human food source in Melanesia and Micronesia by virtue of its starch content and its availability. There is a simple process of starch extraction from sago pith that leaches away a sufficient amount of the toxins and thus only the starch component is consumed. Current processes for starch extraction are generally only about 50% efficient, however, with the other half remaining in residual pith waste. The form of the starch after processing is similar to tapioca.
Other foods sometimes mistakenly called piths include heart of palm (actually the core of the bud) and banana piths (actually the rolled up young leaves).
Pith helmets
Pith helmets were once made from the spongy wood of the pith wood plant or other similar species, a material often mistakenly called pith.
Watch cleaning
Pith wood is a cleaning tool used in watchmaking to clean watch parts and tools. It is used to remove oil from the tips of tools to prevent the contamination of watch movements. Pith wood consists of a piece of pith from a plant such as elder or mullein.
Light
Dried pith (which is actually the center of the leaf) of certain rush plants, soaked in fat or grease and held in a rushlight holder, was used as home lighting. Beginning in the 17th century, it continued to be used in this way until the mid-20th century, and it saw a brief revival during World War II.
See also
Papyrus
References
Plant anatomy
Plant morphology
Plant physiology
33,700,606 | https://en.wikipedia.org/wiki/Schrat | The Schrat () or Schratt, also Schraz or Waldschrat (forest Schrat), is a rather diverse German and Slavic legendary creature with aspects of either a wood sprite, domestic sprite and a nightmare demon. In other languages it is further known as Skrat.
Etymology
The word Schrat originates in the same word root as Old Norse skrati, skratti (sorcerer, giant), Icelandic skratti (devil) and vatnskratti (water sprite), a Swedish form (fool, sorcerer, devil), and English scrat (devil).
The German term entered Slavic languages and (via North Germanic languages) Finno-Ugric ones as well. Examples are Polish skrzat, skrzot (domestic sprite, dwarf); a Czech form (domestic sprite, gold-bringing devil/mining sprite); Slovene škrat (domestic sprite, mining sprite); another Slavic form (corn- or gold-bringing being, whirlwind, Polish plait); as well as Estonian kratt (domestic sprite, "treasure/wealth-bringer", comparable to Schratt).
Medieval attestations
The Schrat is first attested in medieval sources. Old High German sources have scrato, scrat, scraz, scraaz, skrez, screiz, waltscrate (walt = forest), screzzol, scratto, sklezzo, slezzo, and sletto (pl. scrazza, screzza, screza, waltscraze, waltsraze).
Middle High German sources give the forms schrat, schrate, waltschrate, waltschrat, schretel, schretelîn, schretlin, schretlein, schraz, schrawaz, schreczl, schreczlein, schreczlîn or schreczlin, and waltscherekken (forest terror; also the pl. schletzen).
In Old High German sources, the word is used to translate the Latin terms referring to wood sprites and nightmare demons, such as pilosi (hairy sprites), fauni (fauns), satiri (satyrs), silvestres homines (forest humans), incubus, incubator, and larva (spirit of the dead). Accordingly, the earliest known Schrat was likely a furry or hairy fiend or an anthropomorphic or theriomorphic spirit dwelling in the woods and causing nightmares.
Middle High German sources continued to translate satyrus and incubus as Schrat, indicating it was considered a wood sprite or nightmare demon, but another vocabularium glossed Schrat as penates (domestic sprite).
The Middle High German story "Schrätel und Wasserbär (polar bear)" (13th cent.), where the spirit haunts a peasant's house in Denmark, is considered "genuine" house spirit (kobold) material.
The medieval tradition of offering the Schrat pairs of little red shoes was preached as sin by Martin von Amberg (c. 1350–1400).
Diminutive forms
The Schrat is known by numerous diminutive forms, many of which take on the sense of Alp, a nightmare demon. That is to say, many of these carry the sense of an Alptraum (oppressive dream, nightmare) demon or sickness demon, especially in the south. But Schrat diminutives may also refer to a house spirit (kobold) or a stable-haunting being that haunts stables and homes, shearing manes, braiding elflocks, and suckling on livestock and human mothers.
The diminutive form Schrätel, for example, is ambivalent, and is discussed below under both a "dream demon" and "household sprite", below.
To name other such forms, unsorted into specific spirit types: Schrätlein, Schrättlein; Schrättling; Schrötele, Schröttele, Schröttlich, Schreitel; Schrätzlein; and Schlaarzla, Schrähelein.
Wood sprite
The Waldschrat is a solitary wood sprite: scraggly, shaggy, partially animal-like, with eyebrows grown together and wolf teeth in its mouth, as summarized by Hans Pehl in the HdA.
But this is a hotchpotch profile put together from disparate sources. Grimm gave attestations of Waldschrat in medieval romances (Barlaam und Josaphat, Ulrich von Zatzikhoven's Lanzelet) and the poem "Waldschrat", a retelling of Bonerius Fable No. 91, none of which provides much physical description beyond "dwarf" size. The Schrat as Waldgeist is physically described as hairy in commentary by Karl Joseph Simrock, and is equated with Räzel (described further under ); in particular, the trait of a single joined eyebrow is held to be common to the woodland schrat, the Alp, and sorcerers (cf. ), some capable of werewolf-transformation. The last bit (wolflike teeth) appears to be clipped from the description of the beings encountered in the Middle Dutch version of the story of St. Brendan's Voyage. These Walschrande were described as having swine's heads, wolves' teeth, human hands, and shaggy hounds' legs. Celtic origin has been argued in scholarship concerning the schrat in the Arthurian cycle works (e.g. Ulrich's Lanzelet, an adaptation of Lancelot) and the legend of St. Brendan, who was an Irish monk.
The Austrian Schrat (pl. Schratln) or Waldkobold looks like the creature as described above: it is small and usually solitary. The Schratln love the deep, dark forest and will move away if the forest is logged. The Schrat likes to play malicious pranks and tease evilly. If offended, it breaks the woodcutters' axes in two and lets trees fall in the wrong direction.
In the Swiss valley Muotatal, before 1638 there was an Epiphany procession called Greifflete associated with two female wood sprites, Strudeli and Strätteli, the latter being a derivative of Schrat.
Mining demon
A Schrattel can be a Goldteufel (gold devil) that can be made to serve a human, bringing its master gold or silver found in the Pusterwald region, according to a legend from Styria in Austria; the legend was recorded in the novel Das Hochgericht vom Birkachwald.
Nightmare demon
The Alp of German folklore, in the strict sense, refers to an Alptraum (nightmare)-causing demon, and is associated with a pressure as if a horse were riding on the sleeper, with stifling against the pillow, and hence with respiratory and other sicknesses. This demon tends to be known by the name Schrat or its variants in Southern Germany and Switzerland, especially in regions with Alemannic dialect. Such a demon is also considered a sickness demon, as explained above.
Forms
The Alptraum nightmare was known locally under diminutive names such as Schrättel in Switzerland; Schrättlein or Schrettele in Upper Swabia; around Bühl and Wurmlingen in Swabia; or in the "Munster valley" in Alsace.
Other recorded forms include corruptions based on German Schreck (fear or fright), a corrupted form reminiscent of German Scherz (jest), and a form based on Käppel (little cap).
In the historic state of Baden (particularly Swabia), the sprite enters by crawling through the keyhole and sits on the sleeper's chest. It can also enter through the window as a black hen. The Swabian sprite is named as the perpetrator of the "Alp-pressure" bearing down upon the human sleeper's chest or throat.
Livestock dream spirits
In Tirol, it is said that the Schrattel is to livestock what the dream-demon (drude) is to humans. It supposedly pins down livestock ("Schrattl-pressure"), and the affected cattle, pigs, or hens lie down as if paralyzed or dead. Tirolian farmers try to guard against this sprite by crafting the Schratlgatter ("Schrattl-gate") from wooden slats (five pieces of wood interlocked, like a sideways-turned "H" and "X" combined), and it is alleged that hanging one in the henhouse has saved it.
In Switzerland, the Schrättel sucks the udders of cows and goats dry and makes horses become schretig, i.e. fall ill. In Swabia, the Schrettele also sucks human breasts and animal udders until they swell, tangles horse manes, and makes Polish plaits. In Austria, the Schrat tangles horse tails and dishevels horse manes.
Witches, possessions, ghosts
Often, the nightmare demon Schrat is in truth a living human. This Schrat witch can easily be identified by her characteristic eyebrows grown together, the so-called Räzel or Rätzel trait, sometimes applied to the mysterious beings often associated with the Schrat. The appellations Raz and Räzel (Rätzel) were likely formed by apheresis of Schräzel (Schrätzel), according to Wilhelm Hertz.
In Swabia, the Schratt is a woman suffering from a hereditary ailment known as schrättleweis gehen or Schrattweisgehen (both: going in the manner of a Schrat), an affliction usually inherited from one's mother. The afflicted person will have to step out every night at midnight, i.e. the body will lie around as if dead but the soul will have left it in the shape of a white mouse. The Schratt is impelled to "press" (German drücken) something or someone, be it human, cattle, or tree. The nightly Drücken is very exhausting, making the Schratt ill. Only one thing can free the Schratt from her condition: she must be allowed to press the best horse in the stable to death.
According to other Swabian beliefs, the nightmare-bringing Schrat is a child who died unbaptized. In Baden, it is considered a deceased relative of the nightmare victim.
Protective amulets
The Schrat is further known to cause illnesses by shooting arrows. Its arrow is the belemnite (called Schrattenstein, "Schrat stone"), but this stone can also be used to ward the spirit off. Beside the Schrattenstein, it also fears the pentagram (called Schrattlesfuß, "Schrat foot", in Swabia) and stones of the same name with dinosaur footprints. The Schrätteli can be exterminated by burning the bone whose appearance it takes when morning comes. The same is true for burning the straw caught at night, for in the morning it will become a woman covered with burns and never return again. If it is cut with a Schreckselesmesser (Schrat knife), a knife with three crosses on its blade, the Schrettele will also never return again. The Schrat can be kept out of stables by placing the aforementioned wooden Schratlgatter (Schrat fence) above the stable door, or by using a convex mirror called a Schratspiegel (Schrat mirror), which works the same way.
Domestic sprite
Middle High German literature
In the Middle High German story "Schrätel und Wasserbär" (13th cent.), the kobold haunts a peasant's house, but the Danish king lodges there with the polar bear, and after the encounter with the "giant cat" the spirit is frightened away.
A version of this story is set in a miller's house in Berneck (Bad Berneck im Fichtelgebirge), Upper Franconia, Bavaria, where a Holzfräulein replaces the Schrätel and is killed by a "cat".
The Schrätel as a peace-disturber or poltergeist also figures in the Tyrolean poet Hans Vintler's Die Pluemen der Tugent (completed 1411).
Local lore
The term Schrat (or its variants) is thought to have occurred more widely in the sense of "house sprite" in the past. According to belief from the 15th century, every house has a schreczlein which, if honored by the inhabitants of the house, gives its human owners property and honor.
But the sense of Schrat as a Hausgeist or kobold only survived in southeastern Germany and West Slavic regions. More specifically, Schrat as a domestic sprite is particularly known in Bavaria (the Upper Palatinate and the Fichtel Mountains extending into Czech lands; also the Vogtland, which spills into Saxony and Thuringia) and in the Austrian provinces of Styria and Carinthia. In these parts (southeastern Germany and Austria), the Schrat remains more akin to a domestic kobold, only occasionally appearing as an incubus. The form Schrezala was current in the Fichtelgebirge and Vogtland.
In Styria, the local forms are glossed as penates (hearth deities) c. 1500. The Schratl of Carinthia is said to manifest itself as sunlight patterns on walls in the Lesachtal and neighboring valleys, or as a small blue flame or a red face popping out of the window; elsewhere he is considered invisible but perceptible by noises in the walls similar to the cutting sound of scythes, while the Carinthian "Schrat manikin" is also reputed to make knocking noises in the bedroom walls at night like a kobold or poltergeist. The Schratl of Styria is said to be a grunting little man dressed in red or green.
In Styria and Carinthia, the Schratl dwells inside the stove, expecting to be given millet gruel for its services. In Styria, this stove or oven (called Schratlofen; Schrat stove) might also be a solitary rock formation or rock hole rather than a true stove. When summoned, it sits down on the doorstep.
In Carinthia, the Schratl can be intentionally driven away by gifting it clothes. The same motif is exhibited in a story from Upper Franconia, Bavaria, except that the grateful mistress of the house unwittingly gave clothes as a reward to the helpful spirit because it was dressed in tatters. The sprite that causes mischief in the stables is also considered a type of kobold, as it actually dwells in the house.
The schratl is also blamed for causing stabbing pains and "elflocks" (Polish plaits), which are known locally under their own dialect names.
A tale recounts how a man outwitted a Schratl by demanding that he fill his boot with money, when in fact only the cut-off shaft of the boot was attached to his roof ridge. The sprite brought money day and night, which spilled into a big pile without ever filling the boot, and it finally died of exhaustion.
The Polish skrzat (often equated with latawiec, 'the flying one') demands kasha (porridge) for payment, and insists that it not be overly hot.
Animal forms
The Schratel reputedly often appears in the guise of a cat or squirrel in Styria. A Schratzl in the guise of a black cat was driven out of Kirchberg an der Raab into a ditch. Farmers in Donnersbachwald (in Styria) claimed the Schratl can appear as a chamois, buck-goat, or black dog.
In one tale from Styria, the Schrattel appears as a black raven to a man who contracted with the demon and loses his soul. It is also noted that "Schratel" was once a name commonly given to a dog in Styria. In the vicinity of Radenstein (Rottenstein, Bad Kleinkirchheim), the caterpillar is given a name identifying it with the Schratel. The butterfly is sometimes called schrätteli, schrâtl, schràttele or schrèttele and accordingly identified with the nightmare demon Schrätteli. Sorcerers with a unibrow (like the Schratel) are reputedly capable of sending an Alp in the guise of a butterfly to people who are asleep (cf. § witches).
The Schretel appears as a butterfly according to the lore in the Tyrol region (Austria) as well as Sarganserland of the Canton of St. Gallen in Switzerland; in St. Gallen, the creature may appear also in the guise of a magpie, fox, or black cat.
A legend from Obermumpf, Aargau, Switzerland says that a sort of black magician, also widely known in the Black Forest in Germany, could transform into a red mouse or other shapes to creep up on sleeping people, enter through their open mouths and reach the heart, riding them and leaving them half-dead or paralyzed until expelled from the mouth. The sorcerer died but still loitered around as a spirit in the form of a black dog, and was finally purged by Capuchin monks of the Franciscan order.
Egg-hatched, chicken-shaped
There is a motif, recorded for kobolds under various names across many regions including Pomerania, that the sprite is born from an egg laid by a hen. The Polish skrzat in Posen is reputedly born from a hen's egg of a certain peculiar shape, hatched after being kept in the armpit for a long time, and likewise in Kolberg (Kołobrzeg). A number of Polish anecdotes relate that the skrzat appears in the guise of a chicken, a black chicken, an emaciated chicken, or a flying bird trailing sparks.
Alternatively, the škrat could be bred from a black hen, or hired otherwise, but to obtain its services one had to sign away oneself and one's family in a contract signed in blood. It would then bring whatever items the contractor desired to the window, and when carrying money it assumed the form of fire.
Dwarf
The Alsatian Schrätzmännel also appear as dwarves (German Zwerge, sg. Zwerg) dwelling in caves in the woods and mountains.
The same is true for the Razeln or Schrazeln in the Upper Palatinate, whose cave dwellings are known as Razellöcher (Schrat holes). Other names for them are Razen, Schrazen, Strazeln, Straseln, and Schraseln. They dwell in the mountains and help the humans with their work, acting as domestic sprites. This they do at night, for they dislike being seen. They only enter the homes of good people and bring good fortune upon them, expecting only the food left over on the dishes as their payment. Any other form of gratitude, especially gifts, will instead drive them away, for they will think their service has been terminated, and they will leave in tears. First they wait, then they eat, and after that they go into the baking oven for dancing and threshing. Ten pairs, or at least twelve, Razen are said to fit inside an oven for threshing.
Connections with the devil, witches, and deceased souls
A red secretion left behind on trees by butterflies is said to be the blood of the Schrätlein or Schretlein, who are wounded and chased by the devil (German Teufel). Conversely, the Schrat can also be identified as the devil itself.
Schrättlig is a synonym for witch (German Hexe). In Tyrol and the Sarganserland, the Schrättlig is also thought to be the soul of a deceased evildoer living among people as an ordinary human, particularly an old woman. It is able to take on animal appearance, often harms humans, animals and plants, and further causes storms and tempests, but can also become a luck-bringing domestic sprite identified with lares and penates.
The Schrat might also show behavior similar to that of the devil or witches. In Carinthia, whenever somebody wants to hang themselves, a Schratt will come and nod in approval. The Schrat travels in the whirlwind as well; hence the whirlwind is known as Schretel in Bavaria and schrádl in the Burgenland.
In Bavaria, and Tyrol, the souls of unbaptized children forming the retinue of Stempe (i.e. Perchta) are called Schrätlein. Like Perchta, the schretelen were offered food on Epiphany Day in 15th century Bavaria.
In Yiddish Folklore
Shretele
Among the Yiddish-speaking Jews of Eastern Europe, there is belief in a helpful, wealth-multiplying spirit called the shretele (pl. shretelekh), probably connected to the Polish skrzat, which they may have brought with them when they came from Alsace and Southern Germany.
The shretele is very kind. It is described as a small elflike creature, more specifically a tiny, handsome, raggedly dressed little man. Shretelekh can be found in human homes where they like to help out, e.g. by finishing up the making of shoes overnight at a shoemaker's home. If given tiny suits in gratitude, they will stop working and sing that they look too glorious for work, dancing out of the house and leaving good fortune behind.
The shretele might also stretch out a tiny hand from the chimney corner, asking for food. If given e.g. some crackling (gribenes), it will make the kitchen work successful. For example, if pouring goose fat from a frying pan into containers, one might be able to do so for hours, filling all containers in the house without emptying the pan – until someone cusses about this. Cussing will drive the shretele away.
The shretele might also dwell under the bed. From there it might come out to rock the baby's cradle, give the baby a light slap to make it stop crying, or nip from a brandy bottle. A bottle from which a shretele has sipped will always remain full no matter how much is poured out.
Kapelyushnikl
In Yiddish folklore, the function of the nightmare demon belongs to another kind of legendary creature: the kapelyushnikl (Polish for hat maker; pl. kapelyushniklekh), a hat-wearing little being bent on pestering and teasing horses. It is only found in Slavic countries and might even be an original East European Jewish creation.
The kapelyushniklekh can appear as a male and female pair of tiny beings wearing little caps, the woman also having braided hair tied with pretty ribbons.
They love to ride horses all night, many kapelyushniklekh sitting on one horse, rendering the animal exhausted and sweating. Kapelyushniklekh prefer gray horses in particular. If one manages to snatch a cap from a kapelyushnikl, they will be driven away for good. Only the one who lost its cap will come and ask for its return, in exchange for a great deal of gold, though in daylight the gold will have turned into a pile of rocks.
They can also milk cows dry at night and steal the milk, but if caught and beaten they promise that, if spared, they will never return and that the amount of milk given by the cows will be double of what it originally used to be, which will come true.
In Scandinavian and Baltic folklore
In Scandinavian folklore, the skrat is a prankster out in the woods or fields, known for its horselaugh and particularly for spoiling the finds of treasure-hunters: if a man thinks he has spotted a gold ring, the spirit will laugh it away before he has actually gained possession of it. Commentary classes it as a type of myling.
The skrat or skratt is also known among the Estonian Swedes, where it denotes a devil or ghost. It is more commonly called kratt (q.v.; also krätt, rett, rätt), formed by apheresis of skratt, and is a household spirit equivalent of the German Schrat[t]. The kratt more particularly is a "treasure-bearer" (wealth bringer), and the skrat or kratt will enrich the farmer it lives with by stealing (milk, beer, money) from the neighbors.
This "treasure-bearer" has many aliases (around 30), much of which have different etymologies unrelated to Schrat. In appearance, the kratt (also puuk, nasok) is sometimes an artificial composite creature made of old junk, which is four- or three-legged (cf. the 2017 Estonian movie November); the subtype (raha means 'money) is a money-bringer, and often take the form of a human or the composite artificial creature already described. However, the kratt as a group known by various names, and take on the various shapes, including animals such as birds (roosters), dogs (black dogs), or snakes (serpent with a red comb). But even though Charles Dickens as travel writer reported the skratt as a generous wealth-bringing "fiery dragon", its typical appearance is that of "a huge fiery shape with a long tail", and modern scholarship insist that the kratt has never been described literally as a "flying serpent/dragon" per se in the Estonian folklore record (whereas the Belarussian parallel is the flying serpent ), even though the alias name ('spark tail') is evocative of a fiery serpent. The Estonian kratt'''s favourite food is porridge with butter (or "bread-and-butter and two or three types of porridges", which it demands as compensation), in contrast with the Belarusian flying serpent favouring fried eggs. Another point of contrast is that the Estonian kratt'' (or more generally the Finnish, Swedish, etc., Finnic, Finno-Ugric and Scandinavian "treasurer-bearer"), does not exhibit the secondary aspect of the "mythological lover", in contrast to the East (and West, South Slavic) "treasure bearer" which also seduces women, the examples of latter being the aforementioned Polish ('the flying one) and the Belarussian "flying serpent" (cf. fiery serpents for these).
Explanatory notes
References
Citations
Bibliography
Reprint 2000.
Band 1 (1927) Aal-Butzemann.
Band 3 (1931) Freen-Hexenschuss.
Band 4 (1932) Hieb- und stichfest-Knistern.
Band 5 (1933) Knoblauch-Matthias.
Band 6 (1935) Mauer-Pflugbrot
Band 7 (1936) Pflügen-Signatur.
Band 9 (1941) Waage-Zypresse
Schocken Books, 2012 edition.
German legendary creatures
Jewish legendary creatures
Sleep in mythology and folklore
Forest spirits
Household deities
Sprites (folklore)
Wild men
Kobolds
European legendary creatures | Schrat | Biology | 5,826 |
51,779,301 | https://en.wikipedia.org/wiki/Jetset%20Magazine | Jetset Magazine is an American lifestyle magazine founded in 2006, aimed at those with an affluent lifestyle. It is available as a quarterly print magazine and is distributed in private jets, private yachts, private jet terminals, yacht charters, exclusive resorts and events around the world. It is also available online with content created on a weekly basis.
Editors
Darrin Austin – entrepreneur, philanthropist
Robert Kiyosaki – entrepreneur, author, Rich Dad Poor Dad
Daymond John – entrepreneur; founder, FUBU
Barry LaBov – entrepreneur; chief executive officer
Ken McElroy – real estate investor; author
Tom Zenner – executive editor, Rylin Media; television sports anchor
Scott Walcheck – philanthropist and investor
Tami Austin – editor-in-chief
Readership
The magazine's target readership is the wealthiest one percent. The print magazine is circulated exclusively to private airport lounges, private yachts, exclusive events and travel locations. It is selective with advertisers to ensure it retains its target audience.
Covers
The magazine has featured many celebrities and entrepreneurs on the cover, with exclusive interviews. These include Richard Branson, Tom Hanks, Margot Robbie, Jackie Chan, Daniel Craig, Tom Cruise, Dwyane Wade, Donald Trump, Dwayne Johnson, Ryan Reynolds, Scarlett Johansson, Chris Hemsworth, Charlize Theron and Hugh Jackman.
Controversy
In 2014, the magazine interviewed and filmed Dana White in his office where he revealed he had a piece of art which contained cocaine and black tar heroin within it.
The magazine famously featured American presidential candidate Donald Trump on the cover of its fourth issue of 2015, with exclusive pictures from inside his private jet.
Miss Jetset
The magazine runs an annual competition to find "Miss Jetset". Women from all around the world compete, while raising awareness for the Andrew McDonough B+ Foundation, a children's cancer charity.
2015 winner - Becca Tepper
2016 winner - Laura Lydall
2017 winner - Adaliz Martinez
2018 winner - Lara Sebastian
2019 winner - Enea Culverson
2020 winner - Janeilla Burns
2021 winner - Tanaya Peck
2022 winner - Jessica Ceballos
See also
List of United States magazines
References
External links
, the magazine's official website
2006 establishments in Arizona
Lifestyle magazines published in the United States
Quarterly magazines published in the United States
Visual arts magazines published in the United States
Design magazines
Entertainment magazines published in the United States
Magazines established in 2006
Magazines published in Arizona
Sailing magazines
Scottsdale, Arizona
Women's fashion magazines published in the United States | Jetset Magazine | Engineering | 509 |
24,469,423 | https://en.wikipedia.org/wiki/Beaver%20pipe | Beaver pipes are non-destructive flow devices used to control beaver activity in an ecosystem. The process of building beaver pipes is simple, and they often serve as a permanent way to prevent beavers from damming water.
Enzo Creek Nature Sanctuary
Managers of the watershed of the Enzo Creek Nature Sanctuary experienced the all-too-common problem with beavers: their tenacious drive to dam water. The water level at Enzo Creek Nature Sanctuary had been rising year after year, impacting the fauna in Hunt Marsh, a wetland which serves as both a waterfowl nesting area and refuge. Managers physically removed the dams from Enzo Creek, the only outlet of the watershed; however, beavers would quickly rebuild them. Lethal removal of beavers from the marsh was contrary to the mission of the sanctuary, so the non-lethal method of beaver pipes was approved and adopted.
Overview
The construction of these beaver pipes at Enzo Creek Nature Sanctuary was a two-day project for one person and involved the following steps:
Slowly lower the water level behind the dam.
Insert pipes across the notch.
Allow beavers to complete the project.
References
External links
Beaver Damage Control and Management Information
http://www.fs.fed.us/r2/psicc/leadville/Beaver-Document.pdf
Beavers
Rodents and humans
Wildlife conservation | Beaver pipe | Biology | 269 |
1,528,827 | https://en.wikipedia.org/wiki/330%20Adalberta | 330 Adalberta is a stony asteroid from the inner regions of the asteroid belt, approximately 9.5 kilometers in diameter. It is likely named for either Adalbert Merx or Adalbert Krüger. It was discovered by Max Wolf in 1910. In the 1980s, the asteroid's permanent designation was reassigned from an earlier, non-existent object.
Discovery
Adalberta was discovered on 2 February 1910, by German astronomer Max Wolf at Heidelberg Observatory in southern Germany.
Previously, on 18 March 1892, another body discovered by Max Wolf was originally given this designation, but was subsequently lost and never recovered (see also Lost minor planet). In 1982, it was determined that Wolf had erroneously measured two images of stars, not asteroids. As it was a false positive and the body never existed, the name Adalberta and the number "330" were then reused for this asteroid. The MPC citation was published on 6 June 1982.
Orbit and classification
The S-type asteroid orbits the Sun in the inner main belt at a distance of 1.8–3.1 AU once every 3 years and 11 months (1,416 days). Its orbit has an eccentricity of 0.25 and an inclination of 7° with respect to the ecliptic. Adalberta's observation arc begins with its official discovery observation at Heidelberg in 1910.
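As a rough consistency check, the quoted period follows from Kepler's third law (period in years, semi-major axis in astronomical units), with the semi-major axis estimated here as the mean of the perihelion and aphelion distances given above; the small discrepancy comes from rounding those distances to one decimal place:

```latex
a \approx \tfrac{1}{2}(1.8 + 3.1)\,\mathrm{AU} = 2.45\,\mathrm{AU},
\qquad
T \approx a^{3/2}\,\mathrm{yr} = 2.45^{3/2}\,\mathrm{yr} \approx 3.8\,\mathrm{yr} \approx 1400\,\mathrm{d},
```

which is close to the quoted 3 years and 11 months (1,416 days).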
Naming
This minor planet was named in honor of the discoverer's father-in-law, Adalbert Merx (after whom another minor planet, 808 Merxia, is also named). However, it is also possible that it was named for Adalbert Krüger (1832–1896), a German astronomer and editor of the Astronomische Nachrichten, which was one of the first international journals in the field of astronomy. The naming citation was first mentioned in The Names of the Minor Planets by Paul Herget in 1955.
Physical characteristics
Rotation period
In 2013, a rotational lightcurve of Adalberta was obtained from photometric observations at Los Algarrobos Observatory in Uruguay. Light-curve analysis gave a well-defined rotation period with a brightness variation of 0.44 magnitude.
Diameter and albedo
According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Adalberta measures 9.11 kilometers in diameter, and its surface has an albedo of 0.256, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for stony asteroids of 0.20 and calculates a diameter of 9.84 kilometers using an absolute magnitude of 12.4.
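The CALL value quoted above follows from the standard relation between an asteroid's diameter D (in kilometers), geometric albedo p_V, and absolute magnitude H:

```latex
D = \frac{1329\,\mathrm{km}}{\sqrt{p_V}} \, 10^{-H/5}
  = \frac{1329\,\mathrm{km}}{\sqrt{0.20}} \, 10^{-12.4/5}
  \approx 9.84\,\mathrm{km}.
```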
Notes
References
External links
Lightcurve Database Query (LCDB), at www.minorplanet.info
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
000330
Discoveries by Max Wolf
Named minor planets
19100202
Recovered astronomical objects | 330 Adalberta | Astronomy | 624 |
51,529,881 | https://en.wikipedia.org/wiki/Nuclear%20calcium | The concentration of calcium in the cell nucleus can increase in response to signals from the environment. Nuclear calcium is an evolutionarily conserved, potent regulator of gene expression that allows cells to undergo long-lasting adaptive responses. The 'Nuclear Calcium Hypothesis' by Hilmar Bading describes nuclear calcium in neurons as an important signaling end-point in synapse-to-nucleus communication that activates gene expression programs needed for persistent adaptations. In the nervous system, nuclear calcium is required for long-term memory formation, acquired neuroprotection, and the development of chronic inflammatory pain. In the heart, nuclear calcium is important for the development of cardiac hypertrophy. In the immune system, nuclear calcium is required for human T cell activation. Plants use nuclear calcium to control symbiosis signaling.
References
Neuroscience
Gene expression
Calcium | Nuclear calcium | Chemistry,Biology | 166 |
9,211,582 | https://en.wikipedia.org/wiki/Cochliobolus%20miyabeanus | Cochliobolus miyabeanus (teleomorph, formerly known as Helminthosporium oryzae, anamorph Bipolaris oryzae) is a fungus that causes brown spot disease in rice.
It was considered for use by the US as a biological weapon against Japan during World War II.
Hosts and symptoms
Brown spot of rice is a plant fungal disease that usually occurs on the host leaves and glume, as well as seedlings, sheaths, stems and grains of adult host plants. Hosts include Oryza (Asian rice), Leersia (Cutgrass), Zizania (Wild rice), and other species as well such as Echinochloa colona (junglerice) and Zea mays (maize).
Cochliobolus miyabeanus may cause a wide range of symptoms. General symptoms can be observed on many parts of the host plant, including leaves, seeds, stems and inflorescences, together with the characteristic brown spots. Discoloration of stems is another symptom that develops from brown spot of rice disease. Oval-shaped brown spots with grey-colored centers on host leaves are the visible sign of fungal growth. The fungus produces a toxin known as ophiobolin which inhibits the growth of roots, coleoptiles, and leaves. This pathogen has also been known to produce non-host-specific toxins which suppress plant defenses, causing the characteristic brown spots on rice leaves.
Dark coffee-coloured spots appear in the panicle and severe attacks cause spots in the grain and loss of yield and milling quality.
Also, lesions on glumes and seeds occur if the pathogen associates with other fungi and insects. Such lesions may develop when favorable condition for sporulation is present.
Importance
Cochliobolus miyabeanus is an important plant pathogen because it causes a common and widespread rice disease responsible for high crop yield losses. It was a major cause of the Bengal famine of 1943, in which crop yields dropped by 40% to 90% and the deaths of 2 million people were recorded. It is a possible agroterrorism weapon. Other known cases of severe crop loss caused by Cochliobolus miyabeanus are globally distributed. In the Philippines, rice seedling mortality rates of up to 60% have been recorded. In India and Nigeria, it can reduce total crop yield by up to 40%. Similar losses are observed in Suriname and Sumatra.
Environment
There are several factors influencing the disease cycle and epidemics of brown spot of rice disease.
Rainfall and drought – The first factor affecting Cochliobolus miyabeanus life cycle is rainfall and drought. It tends to proliferate when there is reduced rainfall and in dewy conditions. In addition to a low level of precipitation, severe epidemics of rice brown spot occur during drought season. Compared to well-flooded or irrigated areas, disease occurrence is favored in drier environments where a reduced amount of water is present.
Temperature and humidity – Another factor affecting disease development for Cochliobolus miyabeanus is temperature and humidity. Infection efficiency is influenced by the humidity level of the leaves, and lowered minimum temperature for crop cultivation favors epidemics of this disease. Infection by this pathogen is favored by long durations of leaf wetness; however, this disease has even been reported without free water when humidity levels are above 89%. Cochliobolus miyabeanus grows well at lower temperatures during its developmental stages compared to the developed stage, so if high temperatures are maintained in the area it is likely that farmers can restrict the growth of this pathogen. The optimal temperature for the pathogen is between 20 and 30 °C, however the pathogen can occur anywhere between 16 and 36 °C.
Nutrition level – Nutrition of the host plant may also influence the level of disease development. For example, low soil nutrient content is associated with epidemics of rice brown spot. Deficiencies of soil minerals such as nitrogen, potassium, phosphorus, silicon and manganese are likely to favor disease development. In particular, in areas where the soil silicon content is high, the host becomes less susceptible to this disease, because silicon not only alleviates physiological stresses of the host but also promotes disease resistance in the host. Furthermore, soil moisture level contributes to disease occurrence: brown spot of rice is favored in areas where soil water content is low.
Management
Prevention
The spread of the fungus can be prevented by using certified disease-free seed and using available resistant varieties such as MAC 18.
Avoiding dense sowing can also help prevent the spread of the fungus, as it reduces humidity.
Maintaining control of weeds and removal of volunteer crops in the field can also prevent fungal spread, as well as burning the stubble of infected plants.
Seed treatments can also be used as a preventative measure. Seeds can be treated with fungicides or alternatively soaking seeds in cold water for 8 hours before treating with hot water (53–54 °C) for 10–12 minutes prior to planting.
Soil treatments can also be used to prevent the spread of C. miyabeanus. The addition of potassium and calcium if the soil is deficient can help boost disease resistance. However, excessive application of nitrogen fertilisers should be avoided.
Control
Once symptoms are observed, the disease may be controlled by removal and burning of any infected plants and by maintaining water levels of up to 3 inches at grain formation.
See also
list of rice diseases
References
World Food Crisis: Meeting the demands of a growing population by Jeff Batten, APS/CPS Annual Meeting, Monday, August 9, 1999
Sources
Fungal plant pathogens and diseases
Rice diseases
Cochliobolus
Fungus species | Cochliobolus miyabeanus | Biology | 1,173 |
71,409,538 | https://en.wikipedia.org/wiki/Necrobotics | Necrobotics is the practice of using biotic materials (or dead organisms) as robotic components. In July 2022, researchers in the Preston Innovation Lab at Rice University in Houston, Texas published a paper in Advanced Science introducing the concept and demonstrating its capability by repurposing dead spiders as robotic grippers and applying pressurized air to activate their gripping arms.
Necrobotics utilizes the spider's organic hydraulic system and their compact legs to create an efficient and simple gripper system. The necrobotic spider gripper is capable of lifting small and light objects, thereby serving as an alternative to complex and costly small mechanical grippers.
Background
The main appeal of the spider's body in necrobotics is its compact leg mechanism and use of hydraulic pressure. The spider's anatomy utilizes a simple hydraulic (fluid) pressure system. Spider legs have flexor muscles that naturally constrict their legs when relaxed. A force is required to straighten and extend their legs, which spiders accomplish by pumping hemolymph fluid (blood) through their joints as a means of hydraulic pressure. It takes no external power to curl their legs due to their flexor muscles' natural curled state.
In July 2022, researchers in the Preston Innovation Lab at Rice University published a paper detailing their experiments with the gripper. Although dead spiders no longer produce hemolymph, Te Faye Yap (lead author and mechanical engineering graduate) found that pumping air through a needle into the spider's cephalothorax accomplishes the same results as hemolymph. The original hydraulic (fluid) system is essentially converted into a pneumatic (air) system.
Fabrication
Obtain a spider (preferably a wolf spider)
Euthanize the spider using a cold temperature of around -4°C for 5-7 days
Insert a 25 gauge hypodermic needle into the spider's cephalothorax (main body)
Apply glue around the needle to form a seal and allow it to dry
Connect a syringe or pump to the needle
Extend the spider's legs by pumping air in
Testing and Data
Internal Force Versus Gripping Force
The typical pressure in a resting spider's legs ranges from 4 kPa to 6.1 kPa. Researchers extended the legs by increasing the spider's internal pressure to 5.5 kPa. Pumping air into the body increases the internal pressure, causing the legs to extend. Pumping air out of the body decreases internal pressure, causing the legs to contract due to their flexor leg muscles. When the internal pressure decreases to 0 kPa, the gripper is fully closed, allowing it to grasp objects. This demonstrates that as internal pressure decreases, the gripping force increases; inversely, when internal pressure increases, the gripping force decreases. By gripping individual weighted acetate beads, it was found that the necrobotic gripper achieves a maximum gripping force of 0.35 millinewtons.
Spider Weight Versus Gripping Force
To estimate the gripping forces of smaller and larger spiders, researchers created a plot predicting gripping force relative to the size of the spider. The wolf spider's gripping force is roughly equal to its body weight: the gripper has a mass of 33.5 mg and can lift 1.3 times its body weight (43.6 mg, or 0.35 mN). With larger spiders, however, the gripping force relative to body weight decreases. For example, a 200-gram goliath birdeater is predicted to lift only 10% of its weight (20 grams, or 196 mN). Though the relative gripping force decreases with spider mass, larger spiders still exert greater absolute gripping forces than smaller spiders.
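A rough way to connect the two data points above (a 33.5 mg gripper lifting 1.3 times its body weight, and a 200 g spider predicted to lift 10% of its weight) is a power-law interpolation of payload fraction against body mass. The power-law form is an assumption made here for illustration; the researchers' published plot uses their own model.

```python
import math

# Two (body mass [g], payload fraction) points taken from the text above.
m1, r1 = 0.0335, 1.3   # wolf spider gripper: 33.5 mg lifts 1.3x body weight
m2, r2 = 200.0, 0.10   # goliath birdeater: predicted to lift 10% of body weight

# Fit payload fraction r(m) = c * m^k through both points (an assumption).
k = math.log(r2 / r1) / math.log(m2 / m1)
c = r1 / m1 ** k

for m in (0.01, 0.1, 1.0, 10.0, 100.0):
    r = c * m ** k
    print(f"body mass {m:7.2f} g -> payload fraction {r:5.2f} "
          f"(absolute payload {r * m * 1000:8.1f} mg)")
```

The fit reproduces the stated trend: the payload fraction falls with body mass while the absolute payload still grows.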
Gripper Lifespan
The necrobotic gripper's functionality is entirely reliant on the structural integrity of the spider. If the spider were to break down easily and frequently, the gripper would not be practical. Using cyclic testing, a series of repeated actions, it is found that the necrobotic gripper can actuate 700 to 1000 times. After 1000 cycles, cracks begin forming on the membrane of the leg joints due to dehydration. Weakened and decomposing joints lead to frequent breakage and replacement, thereby serving as an obstacle in applying necrobotics to real-world scenarios.
One theorized fix to this issue is applying beeswax or a lubricant to the joints. Researchers found that over 10 days, the mass of an uncoated spider decreased 17 times more than the mass of a spider coated with beeswax. Lubricating joints combats dehydration and slows the loss of organic material.
Applications
Necrobotics can serve as a fast and precise alternative to mechanical components that are difficult to manufacture. Because small mechanical grippers are costly and complex, the necrobotic gripper can be used as a replacement. These pneumatic spider grippers can be fabricated in under 30 minutes and have a relatively long lifespan of up to 1000 cycles. The necrobotic gripper is ideal for processes requiring delicate handling of materials and maneuvering light objects into tight spaces. There may also be applications in microelectronics, where necrobotic grippers can handle simple pick-and-place actions.
Besides the necrobotic spider gripper, there are no other robotic concepts under the necrobotics subfield. Future necrobotic concepts can utilize soft robotics and electrical stimuli to repurpose biotic material into biohybrid systems. Another application of necrobotics is utilizing preexisting bone structures to house robotic components.
Constraints
With the use of organic material, there is a higher chance of the component decomposing and breaking down than with traditional mechanical systems. Additional work and management may be required to replace these grippers if they fail. Additionally, organic variability among spiders yields inconsistent results: not all wolf spiders develop identically, so gripping force and leg contraction can vary between grippers.
There are moral implications behind euthanizing spiders for robotics. The ethical boundaries that necrobotics push in the pursuit of biohybrid systems raise concerns, as opponents say it may lead to the hybridization of mammals and is intrusive to nature. Proponents respond that repurposing dead animals has been human practice for millennia and that necrobotics should be pursued to advance science.
See also
3D bioprinting
Biomedical engineering
Blood substitute
Remote control animal
Soft robotics
References
Robotics
Undead
Biorobotics | Necrobotics | Engineering | 1,336 |
75,453,455 | https://en.wikipedia.org/wiki/Cyclin%20E/Cdk2 | The Cyclin E/Cdk2 complex is a structure composed of two proteins, cyclin E and cyclin-dependent kinase 2 (Cdk2). Similar to other cyclin/Cdk complexes, the cyclin E/Cdk2 dimer plays a crucial role in regulating the cell cycle, with this specific complex peaking in activity during the G1/S transition. Once the cyclin and Cdk subunits join together, the complex gets activated, allowing it to phosphorylate and bind to downstream proteins to ultimately promote cell cycle progression. Although cyclin E can bind to other Cdk proteins, its primary binding partner is Cdk2, and the majority of cyclin E activity occurs when it exists as the cyclin E/Cdk2 complex.
G1/S transition
Across eukaryotic cell types, the cell cycle is highly conserved, and the cyclin/Cdk complexes are consistently essential in driving the entire process forwards. Shortly before the end of G1 phase, cyclin E joins with Cdk2 to activate its serine-threonine kinase activity and thus promote entry into S phase.
Eukaryotic cells possess two types of cyclin E, cyclin E1 and cyclin E2, with the protein sequences sharing 69.3% similarity in humans despite being encoded by two different genes. While there is significant overlap in function between the two cyclin Es, there are distinct differences in the roles and regulation of each type. For example, in Xenopus laevis embryos only cyclin E1 is necessary for viability.
In living cells, over-expression (an excess amount) of either cyclin E type results in an earlier activation of the cyclin E/Cdk2 complex and the subsequent shortening of G1 phase and thus accelerated movement into S phase. The cyclin E/Cdk2 complex is not only important in regulating the G1/S transition, but in fact necessary and sufficient, as cells lacking functional cyclin E are unable to enter S phase, remaining forever arrested in G1.
Complex activation
The cyclin E protein contains a section called the cyclin box, which interacts with the PSTAIRE helix on Cdk2 to enact a conformational change in Cdk2's T loop. The resulting exposure of Cdk2's catalytic site enables Cdk-activating kinase (CAK) to phosphorylate Cdk2, allowing full activation of the cyclin E/Cdk2 complex. Once the protein dimer is formed and activated, it phosphorylates several important proteins including "proteins involved in centrosome duplication (NPM, CP110, Mps1), DNA synthesis (Cdt1), DNA repair (Brca1, Ku70), histone gene transcription (p220/NPAT, CBP/p300, HIRA) and Cdk inhibitors p21Waf1/Cip1 or p27Kip1." The complex interacts with its substrates through two distinct regions of the cyclin E protein: the MRAIL and VDCLE domains. MRAIL is located at the N-terminus of cyclin E's cyclin box and interacts with proteins containing an RLX sequence (arginine-leucine-any amino acid) such as Rb and p27KIP1. VDCLE is located at cyclin E's C-terminal region and interacts with proteins of the retinoblastoma family including Rb1, p107, and p130.
Localization
Cyclin E is predominantly found in the cell nucleus, and although it shuttles between the nucleus and the cytoplasm, it typically appears as a nuclear protein in images as its nuclear import is more rapid than its export. Cyclin E's nuclear localization sequence (NLS) allows the cyclin E/Cdk2 complex to readily enter the nucleus, although other mechanisms are believed to help the complex localize to the region as well. Cyclin E also contains a centrosome localization sequence (CLS) that plays a key role in allowing the cyclin E/Cdk2 complex to control centrosome duplication during early S phase.
Retinoblastoma protein
Background–phosphorylation
The retinoblastoma tumor suppressor protein (Rb) plays a key regulatory role in several cellular activities, such as the G1 restriction checkpoint, the DNA damage checkpoint, cell cycle exit, and cellular differentiation. As its full name suggests, cells containing mutations in pathways upstream of Rb, or more rarely in the protein itself, are often cancerous. In fact, the majority of human cancer cells contain mutations in proteins responsible for phosphorylating Rb, such as deletions (p16) or over-expressions (cyclin D, Cdk4, Cdk6).
Within its structure, Rb contains 16 possible sites for phosphorylation by other proteins. Surprisingly, however, it exists in only 3 possible states: un-phosphorylated (no sites phosphorylated), mono-phosphorylated (one site phosphorylated), or hyper-phosphorylated (all available sites phosphorylated). In G0 phase, Rb exists solely in its un-phosphorylated form, but in early G1 phase, the cyclin D:Cdk4/6 complex adds one phosphate group and the protein remains in its mono-phosphorylated form until late G1, when it is rapidly hyper-phosphorylated by the cyclin E/Cdk2 complex.
Cell cycle progression
The key mechanism through which the cyclin E/Cdk2 complex promotes S phase progression is through Rb and E2F transcription factors. Transcription factors (TFs) regulate the rate at which specific target genes are transcribed from DNA to RNA, i.e. transcription. At the end of G1, cells move through the restriction point, essentially "the point of no return", as cells that pass through are irreversibly committed to division and extracellular signals are no longer required for cell cycle progression. The rapid accumulation and activation of the cyclin E/Cdk2 complex through positive feedback loops drives the cell forward through G1.
After phosphorylation by cyclin D:Cdk4/6, mono-phosphorylated Rb binds to E2F family proteins, preventing their target genes from being transcribed; interestingly, one of these target genes is cyclin E. The rate-limiting, switch-like step that initially activates the cyclin E/Cdk2 complex after Rb mono-phosphorylation is currently unknown, but it is hypothesized that activation is regulated by an unidentified metabolic sensor, such that once the necessary metabolic threshold has been exceeded, the sensor activates cyclin E/Cdk2. The metabolic sensor's activation of the cyclin E/Cdk2 complex initiates the process of Rb hyper-phosphorylation.
Mono-phosphorylated Rb inactivates E2F TFs, but hyper-phosphorylation of Rb results in Rb inactivation, causing the release of E2F proteins from the Rb binding cleft and consequent activation of the E2F family proteins to initiate transcription of their target genes. As a result, more cyclin E is transcribed and more cyclin E/Cdk2 complex is formed and activated. Thus, since cyclin E/Cdk2 activates its own transcription factors, cyclin E/Cdk2 can facilitate its own activation, leading to a rapid accumulation of the complex and simultaneous rapid hyper-phosphorylation (i.e. inactivation) of Rb. The rapid inactivation of Rb causes a sudden switch-like transition through the late G1 restriction point (and into S phase). In summary, cyclin E/Cdk2's inactivation of Rb activates E2F, which activates more cyclin E (and thus the cyclin E/Cdk2 complex), creating a strong positive feedback loop that results in sudden inactivation of Rb and the irreversible push out of G1 and into S phase.
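The switch-like transition produced by this positive feedback loop can be illustrated with a deliberately crude toy model, sketched below. Everything in it (the Hill-type form for E2F release, the rate constants, and the input values) is an invented assumption for illustration only, not a model drawn from the cell-biology literature.

```python
# Toy simulation of the cyclin E/Cdk2 -> Rb hyper-phosphorylation -> E2F -> cyclin E
# positive feedback loop. All parameter values and functional forms are invented
# assumptions for illustration; they are not measured kinetics.

def steady_activity(basal_input, steps=50000, dt=0.01):
    a = 0.0                                       # active cyclin E/Cdk2, arbitrary units
    for _ in range(steps):
        e2f_free = a**4 / (1.0 + a**4)            # fraction of E2F released by Rb hyper-phosphorylation
        synthesis = basal_input + 3.0 * e2f_free  # free E2F drives further cyclin E production
        a += dt * (synthesis - a)                 # first-order loss of activity
    return a

for basal in (0.05, 0.2, 0.6):
    # Below a threshold the loop stays off; above it, activity jumps to a high state.
    print(f"basal input {basal:.2f} -> steady cyclin E/Cdk2 activity {steady_activity(basal):.2f}")
```

Run as written, the first two inputs settle near their basal values while the third jumps to a much higher steady state, mimicking the sudden, irreversible commitment described above.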
References
Cell cycle regulators | Cyclin E/Cdk2 | Chemistry | 1,792 |
1,508,678 | https://en.wikipedia.org/wiki/Kansei%20engineering | Kansei engineering (Japanese: 感性工学 kansei kougaku, emotional or affective engineering) aims at the development or improvement of products and services by translating the customer's psychological feelings and needs into the domain of product design (i.e. parameters). It was founded by Mitsuo Nagamachi, professor emeritus of Hiroshima University (also former Dean of Hiroshima International University and CEO of International Kansei Design Institute). Kansei engineering parametrically links the customer's emotional responses (i.e. physical and psychological) to the properties and characteristics of a product or service. In consequence, products can be designed to bring forward the intended feeling.
It has been adopted as one of the topics for professional development by the Royal Statistical Society.
Introduction
Product design has become increasingly complex as products contain more functions and have to meet increasing demands such as user-friendliness, manufacturability and ecological considerations. With a shortened product lifecycle, development costs are likely to increase. Since errors in the estimation of market trends can be very expensive, companies perform benchmarking studies that compare with competitors on strategic, process, marketing, and product levels. However, success in a certain market segment requires knowledge not only about the competitors and the performance of competing products, but also about the impressions a product leaves on the customer. The latter requirement becomes much more important as products and companies mature. Customers purchase products based on subjective terms such as brand image, reputation, design, impression, etc. A large number of manufacturers have started to consider such subjective properties and develop their products in a way that conveys the company image. A reliable instrument is therefore needed: an instrument which can predict the reception of a product on the market before the development costs become too large.
This demand has triggered the research dealing with the translation of the customer's subjective, hidden needs into concrete products. Research is done foremost in Asia, including Japan and Korea. In Europe, a network has been forged under the 6th EU framework. This network refers to the new research field as "emotional design" or "affective engineering".
History
People want to use products that are functional at the physical level, usable at the psychological level and attractive at the emotional level. Affective engineering is the study of the interactions between the customer and the product at that third level. It focuses on the relationships between the physical traits of a product and its affective influence on the user. Thanks to this field of research, it is possible to gain knowledge on how to design more attractive products and make the customers satisfied.
Affective engineering (or Kansei engineering) is one of the major areas of ergonomics (human factors engineering). The study of integrating affective values in artifacts is not new at all. Already in the 18th century, philosophers such as Baumgarten and Kant established the area of aesthetics. In addition to purely practical values, artifacts have always had an affective component. One example is jewellery found in excavations from the Stone Age. The period of the Renaissance is also a good example.
In the middle of the 20th century, the idea of aesthetics was deployed in scientific contexts. Charles E. Osgood developed his semantic differential method, in which he quantified people's perceptions of artifacts. Some years later, in 1960, Professors Shigeru Mizuno and Yoji Akao developed an engineering approach to connect people's needs to product properties. This method was called quality function deployment (QFD). Another method, the Kano model, was developed in the field of quality in the early 1980s by Professor Noriaki Kano of Tokyo University. Kano's model is used to establish the importance of individual product features for the customer's satisfaction, and hence it creates the optimal requirements for process-oriented product development activities. A pure marketing technique is conjoint analysis. Conjoint analysis estimates the relative importance of a product's attributes by analysing the consumer's overall judgment of a product or service. A more artistic method is called semantic description of environments. It is mainly a tool for examining how a single person or a group of persons experiences a certain (architectural) environment.
Although all of these methods are concerned with subjective impact, none of them can translate this impact into design parameters sufficiently. This can, however, be accomplished by Kansei engineering. Kansei engineering (KE) has been used as a tool for affective engineering. It was developed in the early 1970s in Japan and is now widely spread among Japanese companies. In the middle of the 1990s, the method spread to the United States, but cultural differences may have prevented the method from unfolding its whole potential there.
Procedure
As mentioned above, Kansei engineering can be considered as a methodology within the research field of 'affective engineering'. Some researchers have identified the content of the methodology. Shimizu et al. state that 'Kansei Engineering is used as a tool for product development and the basic principles behind it are the following: identification of product properties and correlation between those properties and the design characteristics'.
According to Nagasawa, one of the forerunners of Kansei engineering, there are three focal points in the method:
How to accurately understand consumer Kansei
How to reflect and translate Kansei understanding into product design
How to create a system and organization for Kansei orientated design
A model on methodology
Different types of Kansei engineering have been identified and applied in various contexts. Schütte examined these types and developed a general model covering the contents of Kansei engineering.
Choice of Domain
Domain in this context describes the overall idea behind an assembly of products, i.e. the product type in general. Choosing the domain includes the definition of the intended target group and user type, market-niche and type, and the product group in question. Choosing and defining the domain are carried out on existing products, concepts and on design solutions yet unknown. From this, a domain description is formulated, serving as the basis for further evaluation. The process is necessary and has been described by Schütte in detail in a couple of publications.
Span the Semantic Space
The expression semantic space was addressed for the first time by Osgood et al., who posited that every artifact can be described in a certain vector space defined by semantic expressions (words). This is done by collecting a large number of words that describe the domain. Suitable sources are pertinent literature, commercials, manuals, specification lists, experts, etc. The number of words gathered varies according to the product, typically between 100 and 1000 words. In a second step, the words are grouped using manual (e.g. affinity diagram) or mathematical methods (e.g. factor and/or cluster analysis). Finally, a few representative words are selected from this group, spanning the semantic space. These words are called "Kansei words" or "Kansei engineering words".
Span the Space of Properties
The next step is to span the space of product properties, which is similar to the semantic space. Spanning the space of product properties involves collecting products representing the domain, identifying key features, and selecting product properties for further evaluation.
The collection of products representing the domain is done from different sources such as existing products, customer suggestions, possible technical solutions and design concepts etc. The key features are found using specification lists for the products in question. To select properties for further evaluation, a Pareto-diagram can assist the decision between important and less important features.
Synthesis
In the synthesis step, the semantic space and the space of properties are linked together. Compared to other methods in affective engineering, Kansei engineering is the only method that can establish and quantify connections between abstract feelings and technical specifications. For every Kansei word, a number of product properties affecting that Kansei word are identified.
The research into constructing these links has been a core part of Nagamachi's work with Kansei engineering in the last few years. Nowadays, a number of different tools are available. Some of the most common tools are:
Category Identification
Regression Analysis / Quantification Theory Type I (see the sketch following this list)
Rough Sets Theory
Genetic Algorithm
Fuzzy Sets Theory
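Of these, Quantification Theory Type I (QT1) amounts to least-squares regression of a Kansei word rating on dummy-coded (categorical) product properties, with the fitted coefficients read as category scores. The sketch below is a minimal illustration under that interpretation; the property levels, ratings, and variable names are invented, and real QT1 implementations add constraints that this sketch omits.

```python
# Minimal QT1-style synthesis sketch: least squares on one-hot product properties.
# Data are invented for illustration only.
import numpy as np

# Four product samples, two categorical properties (shape, finish), one-hot coded:
# columns = [round, angular, matte, glossy]
X = np.array([
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 0, 1],
], dtype=float)
X = np.hstack([np.ones((len(X), 1)), X])      # intercept column

y = np.array([6.5, 5.0, 3.0, 2.0])            # mean semantic-differential ratings for "elegant"

# Minimum-norm least squares handles the redundancy among dummy columns.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["intercept", "round", "angular", "matte", "glossy"], coef):
    print(f"{name:9s} category score {c:+.2f}")
```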
Model building and Test of Validity
After completing the preceding stages, the final step of validation remains. This is done in order to check whether the prediction model is reliable and realistic. In case of prediction model failure, it is necessary to update the space of properties and the semantic space, and consequently refine the model.
The process of refinement is difficult due to the shortage of methods, which shows the need for new tools to be integrated. The existing tools can partially be found in the previously mentioned methods for the synthesis.
Software tools
Kansei engineering has always been a statistically and mathematically advanced methodology. Most types require good expert knowledge and a reasonable amount of experience to carry out the studies sufficiently. This has also been the major obstacle for a widespread application of Kansei engineering.
In order to facilitate application, some software packages have been developed in recent years, most of them in Japan. There are two different types of software packages available: user consoles, and data collection and analysis tools. User consoles are software programs that calculate and propose a product design based on the user's subjective preferences (Kanseis). However, such software requires a database that quantifies the connections between Kanseis and the combination of product attributes. For building such databases, data collection and analysis tools can be used. This section demonstrates some of the tools. There are many more tools used in companies and universities which might not be available to the public.
User consoles
Software
As described above, Kansei data collection and analysis is often complex and connected with statistical analysis. Depending on which synthesis method is used, different computer software is employed. Kansei Engineering Software (KESo), developed at Linköping University in Sweden, uses QT1 for linear analysis. The software generates online questionnaires for the collection of Kansei raw data.
Another software package (Kn6) was developed at the Polytechnic University of Valencia in Spain.
Both software packages improve the collection and evaluation of Kansei data. In this way even users with no specialist competence in advanced statistics can use Kansei engineering.
See also
Affective computing
Gandhian engineering – for low cost, frugal, large distribution product design.
Fahrvergnügen
Japanese quality
References
External links
KANSEI Innovation (Hiroshima, JAPAN)
European Kansei Engineering group
Ph.D thesis on Kansei Engineering (europe)
Ph.D thesis on Website Emotional UX and Kansei Engineering
The Japan Society of Kansei Engineering
The Malaysian Research Intensive Group for Kansei/Affective Engineering
International Conference on Kansei Engineering & Intelligent Systems KEIS
QFD Institute
KESoft
Engineering disciplines | Kansei engineering | Engineering | 2,192 |
61,198,269 | https://en.wikipedia.org/wiki/OpenPsych | OpenPsych is an online collection of three pseudoscientific open access journals covering behavioral genetics, psychology, and quantitative research in sociology. Many articles on OpenPsych promote scientific racism, and the site has been described as a "pseudoscience factory-farm". The journals were started in 2014 by a pair of nonprofessional researchers, Emil Kirkegaard and Davide Piffer, who had difficulty publishing their studies in mainstream peer-reviewed scientific journals. The website describes its contents as open peer reviewed journals, but the qualifications and neutrality of its reviewers and quality of reviews have been disputed.
Founders
OpenPsych was founded by Danish white supremacist Emil Kirkegaard, the registrant of the Mankind Quarterly website. Kirkegaard has controversially pushed for the legalization of child pornography and legally changed his name to William Engman in 2021. Davide Piffer has written on remote viewing, which is widely dismissed by scientists as parapsychology.
Journal contents and quality
OpenPsych consists of three journals — Open Differential Psychology, Open Behavioral Genetics, and Open Quantitative Sociology & Political Science — founded by Emil Kirkegaard and Davide Piffer in 2014. Journal contents are free to access and there is no cost associated with submission. The founders of the website believed that their articles were being regularly rejected by mainstream scientific publishers because of bias against their contentious submissions. Many of the articles are about "race realism", a form of scientific racism, and advance related views which are rejected by mainstream science, such as the idea that there is a genetic basis for group-level differences in measures such as crime and IQ. Unlike typical scientific journals, OpenPsych accepts anonymous manuscripts.
Academic reception
The quality of peer review at OpenPsych has been disputed. Reviewers do not need advanced academic qualifications, nor do they need to specialise in what they review. For example, Kirkegaard reviews paper submissions to two of the journals, but has only a BA in linguistics, claiming he is entirely "self-taught". Most of the reviewers are also authors of articles in the same group of journals. Of the thirteen known members of the review board in 2020, two were anonymous and eight seemed to have doctorates. Members of the review teams include Gerhard Meisenberg, Heiner Rindermann, Peter Frost, John Fuerst, Kenya Kura, Bryan J. Pesta, Noah Carl and Meng Hu.
Political positions
The journals act as a research network for far right, alt-right, and White nationalist causes, following in the footsteps of the Pioneer Fund and Mankind Quarterly; of its top 15 contributors in 2018, 11 had written for Mankind Quarterly in the preceding three years. Several members of its editorial board hold far-right political views and have attended the controversial London Conference on Intelligence. The Southern Poverty Law Center, in an article discussing proponents of scientific racism including Kirkegaard, describes OpenPsych as a "pseudojournal". Kirkegaard is regarded by the Centre for Analysis of the Radical Right to be a "figure on the radical right fringe". Landis MacKellar has described Emil Kirkegaard and John Fuerst as "both outright cranks" noting OpenPsych are "tenderly peer-reviewed online journals specializing in scientifically controversial (bordering on dubious) politically incorrect pieces derived in part from (Roger) Pearsonian hereditarianism."
Review process
Eric Turkheimer, in a coauthored paper in Perspectives on Psychological Science, criticises the review process of OpenPsych's journals and describes them as "pseudo-scientific vehicles for scientific racism".
Controversies
OKCupid
In May 2016, Kirkegaard and Julius Daugbjerg Bjerrekær published a paper in Open Differential Psychology that includes the data of nearly 70,000 OkCupid (a dating website) users, such as their intimate sexual details. The publication was widely criticised at the time and has been described as "without a doubt one of the most grossly unprofessional, unethical and reprehensible data releases." Although Kirkegaard claimed the data was public, this was disputed by data ethics scholar Michael Zimmer, who pointed out that the data is restricted to logged-in users only.
Kirkegaard uploaded the OkCupid data to the Open Science Framework, but this was later removed after OkCupid filed a Digital Millennium Copyright Act (DMCA) complaint.
Noah Carl
In April 2019, Noah Carl, who reviews submissions for Open Quantitative Sociology & Political Science, was dismissed as a research fellow at St Edmund's College, Cambridge University because of his association with OpenPsych, which involved collaborating with a number of individuals who are known to hold racist and far-right political views.
References
External links
2014 establishments in Denmark
Academic publishing companies
Academic journals established in 2014
Fringe science journals
Open access publishers
Publishing companies established in 2014
Race and intelligence controversy
Scientific racism | OpenPsych | Biology | 994 |
59,511,062 | https://en.wikipedia.org/wiki/Aitken%20interpolation | Aitken interpolation is an algorithm used for polynomial interpolation that was derived by the mathematician Alexander Aitken. It is similar to Neville's algorithm.
See also Aitken's delta-squared process or Aitken extrapolation.
External links
Polynomials
Interpolation | Aitken interpolation | Mathematics | 62 |
1,493,763 | https://en.wikipedia.org/wiki/Nike-Iroquois | Nike Iroquois is the designation of a two-stage American sounding rocket. The Nike Iroquois was launched 213 times between 1964 and 1978. The maximum flight height of the Nike Iroquois amounts to 290 km (950,000 ft), the takeoff thrust 48,800 lbf (217 kN), the takeoff weight 700 kg and the length 8.00 m.
References
Nike-Iroquois at Encyclopedia Astronautica
Nike (rocket family) | Nike-Iroquois | Astronomy | 86 |
226,096 | https://en.wikipedia.org/wiki/List%20of%20aircraft%20by%20date%20and%20usage%20category | This is a list of aircraft by date and usage. The date shown is the introduction of the first model of a line but not the current model. For instance, while "the most popular" aircraft, such as Boeing 737 and 747 were introduced in 1960x, their recent models were revealed in the 21st century.
Civil aircraft
Civil air transport
Civil – general aviation
Military aircraft
Fighters
Bombers
Reconnaissance, electronic warfare and Airborne Early Warning
Carrier-based aircraft
Air support/attack aircraft
Training aircraft
Transport
Helicopters and Autogyros
Racing aircraft
Experimental aircraft
Seaplanes and Amphibians
References
Date and usage category
Aircraft | List of aircraft by date and usage category | Physics | 119 |
64,555,856 | https://en.wikipedia.org/wiki/Melotte%20186 | Melotte 186 (also known as Collinder 359) is a large, loosely bound open cluster located in the constellation Ophiuchus. It has an apparent magnitude of 3.0 and an approximate size of 240 arc-minutes.
History
Due to its enormous size, this cluster was never recognized as such before the 20th century. The British astronomer Philibert Jacques Melotte was the first to notice it, describing it in his 1915 catalogue of star clusters as a large group of stars scattered around the star 67 Ophiuchi. In 1931, it was re-observed by Swedish astronomer Per Collinder, who described it as a group of 15 stars devoid of appreciable concentration, providing measurements of its member stars.
Characteristics
Mel 186 is an object of considerable size, both real and apparent, which corresponds to a low concentration of its member stars. Its distance is controversial, and the various estimates depend mainly on which stars are considered effective members; several estimates that placed it at 249 parsecs (812 light-years) are contrasted with more recent estimates that place it at as many as 450 parsecs (1,467 light-years).
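The parsec and light-year figures above follow from the standard conversion 1 pc ≈ 3.2616 ly. A minimal check of that arithmetic:

```python
# Parsec to light-year conversion behind the distance estimates above.
PC_TO_LY = 3.2616          # standard conversion factor, 1 parsec in light-years

for pc in (249, 450):
    print(f"{pc} pc = {pc * PC_TO_LY:.0f} ly")
# -> 249 pc = 812 ly; 450 pc = 1468 ly (the article rounds this to 1467)
```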
Age is also the subject of debate, with measurements showing significant differences here too on the basis of which stars are considered members; initial estimates indicated an age of 20-30 million years, while more recent studies fix its origin at 100 million years, on the basis of measurements of as many as 628 possible member stars with masses between 1.3 and 0.03 M⊙. According to various studies, the Mel 186 stars have the same proper motion, average age and average distance as those of the nearby cluster IC 4665, suggesting a possible interaction between the two objects in the early stages of their existence; on the other hand, Mel 186 can also be seen as a scattered stellar association rather than a real open cluster, due to the considerable distance between its components.
Observing
Melotte 186 is located in the northeastern part of the constellation Ophiuchus, just west of the star Beta Ophiuchi. Due to its declination close to the celestial equator, the cluster can be observed from any latitude of Earth. The best time to observe the cluster is between June and October.
The cluster is composed of magnitude 4-5 stars scattered over a 240 arc-minute region of the night sky. Though the open cluster is visible to the naked eye, it does not contrast well against the night sky due to its sparse appearance. Through binoculars one can resolve several dozen more stars down to magnitude 8, which are mainly concentrated around the eastern side of the cluster. Due to its large size, telescopes do not afford an improved view of the cluster.
The brighter stars make up the "face" of the former constellation Taurus Poniatovii, due to their V-shaped formation which resembles the Hyades cluster.
See also
Melotte 20
References
Open clusters
Ophiuchus | Melotte 186 | Astronomy | 597 |
13,906,212 | https://en.wikipedia.org/wiki/Methylketobemidone | Methylketobemidone is an opioid analgesic that is an analogue of ketobemidone. It was developed in the 1950s during research into analogues of pethidine and was assessed by the United Nations Office on Drugs and Crime but was not included on the list of drugs under international control, probably because it was not used in medicine or widely available.
Methylketobemidone is so named because it is the methyl ketone analogue of bemidone (hydroxypethidine). The more commonly used ethyl ketone ("ethylketobemidone") is simply called ketobemidone, as it is the only drug of this family to have been marketed.
Presumably methylketobemidone produces similar effects to pethidine, such as analgesia and sedation, along with side effects such as nausea, itching, vomiting and respiratory depression which may be harmful or fatal.
References
UNODC Bulletin on Narcotics 1954
Opioids
Synthetic opioids
3-Hydroxyphenyl compounds
Ketones
4-Phenylpiperidines
Mu-opioid receptor agonists | Methylketobemidone | Chemistry | 244 |
47,255,513 | https://en.wikipedia.org/wiki/Benzyltriethylammonium%20hydroxide | Benzyltriethylammonium hydroxide is a quaternary ammonium salt that functions as an organic base.
Uses
Together with benzyltrimethylammonium hydroxide, salts of benzyltriethylammonium are common phase-transfer catalysts.
References
Hydroxides
Quaternary ammonium compounds
Reagents for organic chemistry
Benzyl compounds | Benzyltriethylammonium hydroxide | Chemistry | 75 |
37,644,682 | https://en.wikipedia.org/wiki/Advances%20and%20Applications%20in%20Bioinformatics%20and%20Chemistry | Advances and Applications in Bioinformatics and Chemistry is a peer-reviewed scientific journal covering research in bioinformatics, especially as applied to chemistry, including computational biomodeling, molecular modeling, and systems biology. It was established in 2008 and is published by Dove Medical Press.
External links
English-language journals
Open access journals
Dove Medical Press academic journals
Bioinformatics and computational biology journals | Advances and Applications in Bioinformatics and Chemistry | Biology | 82 |
31,495,595 | https://en.wikipedia.org/wiki/Apollonicon | The Apollonicon was a self-acting barrel organ, built by the English Organ builders Flight & Robson in London and presented to the public the first time in 1817. Said to have been the biggest barrel and finger organ ever built, it was an automatic playing machine with about 1,900 pipes and 45 organ stops. It was inspired by Johann Nepomuk Mälzel's Panharmonikon. It also had five keyboards, one of them used as the pedal keyboard, so the instrument could be played by a few persons in manual mode as well.
A prototype for the machine was made by Flight & Robson in 1811 at the request of Lord Kirkwall, under the direction of the Earl's protégé, the blind organist John Purkis. (Purkis had previously composed at least one piece for the Panharmonikon.) Impromptu demonstrations of this machine at 101 St Martin's Lane (the firm's showrooms) attracted thousands of people. The instrument was installed at Kirkwall's London home in Charles Street, Berkeley Square, where it impressed the Prince Regent (later King George IV) at a dinner party in 1813.
The success persuaded Flight & Robson to build a much larger self-playing machine, the Apollonicon. Purkis performed regular Saturday afternoon recitals on the instrument at St Martin's Lane for the next 21 years. Rachel Cowgill has called the Apollonicon recitals "virtually synonymous with the establishment of the public organ recital in England... the first to be held in a secular venue and run on a purely commercial basis".
The St Martin's Lane lease expired in 1845 and the Apollonicon was dismantled and re-assembled at the Music Hall in the Strand. In the 1860s it was extended with a sixth console and moved again, to the Royal Music Hall, Lowther Arcade, off the Strand, in 1868.
A very detailed description with drawings can be found in the Mechanics Magazine from 1828. A notice about it is to be found in Polytechnisches Journal, 1828, with the Germanized name Apollonikon.
References
External links
Description at the British Institute of Organ Studies, with drawings and a bibliography
Description in The Victorian Dictionary
Keyboard instruments
Aerophones
Mechanical musical instruments
1817 introductions | Apollonicon | Physics,Technology | 463 |
12,506,327 | https://en.wikipedia.org/wiki/Saint%20Croix%20racer | The Saint Croix racer (Borikenophis sanctaecrucis) is a possibly extinct species of snake in the family Colubridae that is endemic to the island of Saint Croix in the United States Virgin Islands.
Etymology
The specific name, sanctaecrucis, refers to the island of Saint Croix, on which the holotype was collected.
Description
B. sanctaecrucis may attain a snout-to-vent length (SVL) of . It has smooth dorsal scales, which are arranged in 17 rows at midbody. The holotype has a total length of , which includes a tail long.
B. sanctaecrucis is oviparous.
Habitat
The preferred natural habitat of B. sanctaecrucis is xeric forest.
Conservation
B. sanctaecrucis is feared extinct, as it has not been recorded in the more than 100 years since the holotype was collected. St. Croix is a densely populated island, and the species is a fairly large snake. If it is extinct, the most probable causes were predation by introduced mongooses and deforestation of its habitat. However, recent rediscoveries of other Caribbean reptiles that were also thought extinct bring hope that a small population (probably fewer than 50 individuals) of B. sanctaecrucis survives somewhere on St. Croix.
References
Further reading
Boulenger GA (1894). Catalogue of the Snakes in the British Museum (Natural History). Volume II. Containing the Conclusion of the Colubridæ Aglyphæ. London: Trustees of the British Museum (Natural History). (Taylor and Francis, printers). xi + 382 pp. + Plates I-XX. (Dromicus sanctæ-crucis, new combination and emendation, p. 122).
Cope ED (1862). "Synopsis of the Species of Holcosus and Ameiva, with Diagnoses of new West Indian and South American Colubridæ". Proceedings of the Academy of Natural Sciences of Philadelphia 14: 60–82. (Alsophis sancticrucis, new species, p. 76).
Hedges SB, Couloux A, Vidal N (2009). "Molecular phylogeny, classification, and biogeography of West Indian racer snakes of the Tribe Alsophini (Squamata, Dipsadidae, Xenodontinae)". Zootaxa 2067: 1–28. (Borikenophis sanctaecrucis, new combination).
Schwartz A, Henderson RW (1991). Amphibians and Reptiles of the West Indies: Descriptions, Distributions, and Natural History. Gainesville: University of Florida Press. 720 pp. . (Alsophis sanctaecrucis, p. 576).
Schwartz A, Thomas R (1975). A Check-list of West Indian Amphibians and Reptiles. Carnegie Museum of Natural History Special Publication No. 1. Pittsburgh, Pennsylvania: Carnegie Museum of Natural History. 216 pp. (Alsophis sancticrucis, p. 173).
Borikenophis
Endemic fauna of the United States Virgin Islands
Reptiles of the United States Virgin Islands
Reptiles described in 1862
Taxa named by Edward Drinker Cope
Taxonomy articles created by Polbot
Species known from a single specimen | Saint Croix racer | Biology | 690 |
20,306,367 | https://en.wikipedia.org/wiki/George%20W.%20Housner | George W. Housner (December 9, 1910 in Saginaw, Michigan – November 10, 2008 in Pasadena, California) was a professor of earthquake engineering at the California Institute of Technology and National Medal of Science laureate.
Biography
Housner received his bachelor's degree in civil engineering from the University of Michigan, where he was influenced by Stephen Timoshenko. He earned his master's (1934) and doctoral (1941) degrees from the California Institute of Technology, where he was a professor of earthquake engineering from 1945 to 1981, and professor emeritus thereafter. In 2000, he received an honorary Doctor of Science from the University of Southern California.
Annually, in recognition of those who have made extraordinary contributions to earthquake safety research, practices and policies, EERI awards the George W. Housner Medal of the Earthquake Engineering Research Institute. On his death, Housner left a substantial gift to EERI "to advance the objectives of EERI". This gift has been used to train future earthquake engineering policy advocates and thought leaders through the EERI Housner Fellows Program, which has been active since 2011.
Housner died of natural causes November 10, 2008 in Pasadena, California at the age of 97.
Partial list of achievements
Chairman of the earthquake engineering research committee of the National Academy of Sciences
Founding Member of the Earthquake Engineering Research Institute
UNESCO representative to International Institute of Seismology and Earthquake Engineering in Tokyo
AEC advisory panel on safety against ground shock
AID consultant at University of Roorkee, India
Chairman of Geologic Hazards Advisory Committee for Organization for the California State Resources Agency
Chairman of Panel on Aseismic Design and Testing of Nuclear Facilities for International Atomic Energy Agency
Member of Los Angeles County Earthquake Commission
Member of Earthquake Engineering and Hazards Reduction Delegation to People's Republic of China
Consultant to Japanese Atomic Energy Commission and Italian Nuclear Energy Commission and numerous nuclear energy projects in the U.S.
International Association for Earthquake Engineering (IAEE) President (1969–1973)
Elected to National Academy of Sciences 1972
Named Braun Professor of Engineering at Caltech 1974
Chairman of the Earthquake Engineering Committee and the Committee on Dam Safety of the National Research Council (NRC)
Delivered second Mallet-Milne memorial lecture for Society for Earthquake and Civil Engineering Dynamics in London, 1989
References
Further reading
1910 births
2008 deaths
National Medal of Science laureates
20th-century American engineers
American structural engineers
Earthquake engineering
Members of the United States National Academy of Sciences
University of Michigan College of Engineering alumni | George W. Housner | Engineering | 492 |
40,790,542 | https://en.wikipedia.org/wiki/Heteroscorpine | Heteroscopine (HS-1) is the main component of the venom of Heterometrus laoticus. It belongs to the Scorpine toxin family. It is a polypeptide consisting of a defensin-like component on its N-terminal end and a putative potassium channel blocking component on its C-terminal end. It has antimicrobial effect on some bacteria, but not on fungi.
Sources
Heteroscorpine (HS-1) is a component of the venom of the Thai giant scorpion Heterometrus laoticus. This species is a member of the scorpion family commonly known as giant forest scorpions, indigenous to large parts of South and South-East Asia.
Chemistry
The gene coding for HS-1 consists of one intron flanked by two exons. HS-1 is a polypeptide consisting of 95 amino acids. The HS-1 protein bears a strong resemblance to other toxins of the Scorpine family (which is a subgroup of the Beta-KTx toxin family). The polypeptides of the Scorpine family possess two structural and functional domains: an N-terminal α-helix (which has a cytolytic and/or antimicrobial activity similar to that of insect defensins), and a C-terminal region with a CSαβ motif, which causes potassium channel-blocking activity. HS-1 is highly homologous in particular to the Scorpine toxin Panscorpine (from the emperor scorpion) and Opiscorpine (from Opistophthalmus carinatus), with an 80% similarity in amino acid sequence. Opiscorpine and HS-1 are both classified as scorpine-like peptides.
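As a simple illustration of how a percent-similarity figure of this kind can be computed over an alignment, consider the toy sketch below; the sequences are short invented placeholders, not the real HS-1 or Opiscorpine sequences, and real comparisons use proper alignment and similarity matrices rather than exact position matching.

```python
# Toy percent-identity calculation over two already-aligned sequences.
# The strings below are invented placeholders, not real toxin sequences.

def percent_identity(seq_a, seq_b):
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("MKTAYIAKQR", "MKTAHIAKQR"))   # -> 90.0 (%)
```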
Based on its sequence homology with other scorpine-like peptides, HS-1 is likely to be a voltage-gated potassium channel blocker.
HS-1 also has antimicrobial effects on some bacterial species, namely Bacillus subtilis, Klebsiella pneumoniae and Pseudomonas aeruginosa; it has no inhibitory effects on fungi. The inhibitory effect on bacteria has no Gram specificity. Scanning electron microscopy shows that HS-1 causes roughening and blebbing of bacterial cell surfaces. HS-1 contains three disulfide bridges following a typical Cys pattern, similar to that of invertebrate defensins; thus, HS-1 is likely to act in a similar manner.
Toxicity
Symptoms from envenomation of humans by the Heterometrus genus are reported to be of mild severity. A sting can cause redness, swelling, inflammation and pain for hours up to a few days. Injection of the purified toxin into crickets causes paralysis.
References
Peptides
Scorpion toxins | Heteroscorpine | Chemistry | 576 |