Glycolysis

Glycolysis is the metabolic pathway that converts glucose (C6H12O6) into pyruvate and, in most organisms, occurs in the liquid part of cells (the cytosol). The free energy released in this process is used to form the high-energy molecules adenosine triphosphate (ATP) and reduced nicotinamide adenine dinucleotide (NADH). Glycolysis is a sequence of ten reactions catalyzed by enzymes.
The wide occurrence of glycolysis in other species indicates that it is an ancient metabolic pathway. Indeed, the reactions that make up glycolysis and its parallel pathway, the pentose phosphate pathway, can occur in the oxygen-free conditions of the Archean oceans, also in the absence of enzymes, catalyzed by metal ions, meaning this is a plausible prebiotic pathway for abiogenesis.
The most common type of glycolysis is the Embden–Meyerhof–Parnas (EMP) pathway, which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the Entner–Doudoroff pathway and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway.
The glycolysis pathway can be separated into two phases:
Investment phase – wherein ATP is consumed
Yield phase – wherein more ATP is produced than originally consumed
Overview
The overall reaction of glycolysis is:

C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi → 2 pyruvate (CH3COCOO−) + 2 NADH + 2 H+ + 2 ATP + 2 H2O

The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups:
Each exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute 2 H+ overall
Each liberates an oxygen atom when it binds to an adenosine diphosphate (ADP) molecule, contributing 2 O overall
Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced.
In high-oxygen (aerobic) conditions, eukaryotic cells can continue from glycolysis to metabolise the pyruvate through the citric acid cycle or the electron transport chain to produce significantly more ATP.
Importantly, under low-oxygen (anaerobic) conditions, glycolysis is the only biochemical pathway in eukaryotes that can generate ATP, and, for many anaerobic organisms, it is the most important producer of ATP. Therefore, many organisms have evolved fermentation pathways to recycle NAD+ to continue glycolysis to produce ATP for survival. These pathways include ethanol fermentation and lactic acid fermentation.
History
The modern understanding of the pathway of glycolysis took almost 100 years to develop fully. The combined results of many smaller experiments were required to understand the entire pathway.
The first steps in understanding glycolysis began in the 19th century. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. The French scientist Louis Pasteur researched this issue during the 1850s. His experiments showed that alcohol fermentation occurs by the action of living microorganisms, yeasts, and that glucose consumption decreased under aerobic conditions (the Pasteur effect).
The component steps of glycolysis were first analysed by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast, due to the action of enzymes in the extract. This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled laboratory setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate.
The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly, then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP).
Arthur Harden and William Young along with Nick Sheppard determined, in a second experiment, that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP and NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began by observing that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued with the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character.
In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid.
In one paper, Meyerhof and scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz-Kockolaty found that the equilibrium constants for the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. They further ruled out diphosphoglyceraldehyde as a possible intermediate in glycolysis.
With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of the pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were due to the very short lifetime and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways.
Sequence of reactions
Summary of reactions
Preparatory phase
The first five steps of glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P).
Once glucose enters the cell, the first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration inside the cell low, promoting continuous transport of blood glucose into the cell through the plasma membrane transporters. In addition, phosphorylation blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point.
The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below).
The energy expenditure of another ATP in this step is justified in two ways: the glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by phosphofructokinase 1 (PFK-1) is coupled to the hydrolysis of ATP (an energetically favorable step), it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below).
Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell.
The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archaea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaeal species.
Cofactors: Mg2+
Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose), and glyceraldehyde 3-phosphate (an aldose). There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring.
Electrons delocalized in the carbon–carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself, via resonance charge distribution, and by the presence of a charged metal ion prosthetic group.
Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP) that proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation.
Pay-off phase
The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose.
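To make this bookkeeping concrete, the following sketch tallies the counts quoted above (2 ATP invested, 4 ATP and 2 NADH returned); the step labels are shorthand for the reactions described below, included only as annotations.

```python
# Net ATP/NADH yield of glycolysis per glucose, using the counts quoted above.
# Preparatory-phase steps run once per glucose; pay-off steps run twice,
# since each glucose yields two triose phosphates.

# (step label, ATP change, NADH change) per occurrence
preparatory = [
    ("hexokinase: glucose -> G6P", -1, 0),
    ("PFK-1: F6P -> F1,6BP", -1, 0),
]
pay_off = [
    ("GAPDH: G3P -> 1,3-BPG", 0, +1),
    ("phosphoglycerate kinase: 1,3-BPG -> 3-PG", +1, 0),
    ("pyruvate kinase: PEP -> pyruvate", +1, 0),
]

net_atp = sum(a for _, a, _ in preparatory) + 2 * sum(a for _, a, _ in pay_off)
nadh = 2 * sum(n for _, _, n in pay_off)

print(f"net ATP per glucose: {net_atp}")   # 4 produced - 2 consumed = 2
print(f"NADH per glucose: {nadh}")         # 2
assert (net_atp, nadh) == (2, 2)
```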
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (HPO42−), which dissociates to contribute the extra H+ ion and gives a net charge of −3 on both sides.
Here, arsenate (AsO43−), an anion akin to inorganic phosphate, may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1,3-bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis.
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway.
ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 on both sides.
Cofactors: Mg2+
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
Cofactors: 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration.
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
Cofactors: Mg2+
Biochemical logic
The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis after the first control point.
In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis after the second control point.
Free energy changes
The change in free energy, ΔG, for each step in the glycolysis pathway can be calculated using ΔG = ΔG°′ + RT ln Q, where Q is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable.
Using the measured concentrations of each step, and the standard free energy changes, the actual free energy change can be calculated. (Neglecting this is very common: the ΔG of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks.)
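As a minimal numeric sketch of this point, the code below evaluates ΔG = ΔG°′ + RT ln Q for ATP hydrolysis; the ΔG°′ of roughly −30.5 kJ/mol is the commonly quoted textbook value, and the cytosolic concentrations are illustrative assumptions, not measured erythrocyte values.

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 310.0  # physiological temperature, K

# ATP + H2O -> ADP + Pi
dG0 = -30.5e3  # standard transformed free energy, J/mol (textbook value)

# Illustrative (assumed) cytosolic concentrations, mol/L
atp, adp, pi = 3.0e-3, 0.3e-3, 5.0e-3

Q = (adp * pi) / atp            # reaction quotient (water omitted by convention)
dG = dG0 + R * T * math.log(Q)

print(f"Q = {Q:.2e}")
print(f"dG = {dG/1000:.1f} kJ/mol vs dG0' = {dG0/1000:.1f} kJ/mol")
# With these concentrations dG comes out near -50 kJ/mol, considerably more
# negative than the standard value, which is the point made above.
```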
From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps—the ones with large negative free energy changes—are not in equilibrium and are referred to as irreversible; such steps are often subject to regulation.
Step 5 in the figure is shown behind the other steps, because that step is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that ΔG is not zero indicates that the actual concentrations in the erythrocyte are not accurately known.
Regulation
The enzymes that catalyse glycolysis are regulated via a range of biological mechanisms in order to control overall flux through the pathway. This is vital for both homeostasis in a static environment, and metabolic adaptation to a changing environment or need. The details of regulation for some enzymes are highly conserved between species, whereas others vary widely.
Gene expression: the cellular concentrations of glycolytic enzymes are modulated through regulation of gene expression by transcription factors, with several glycolytic enzymes themselves acting as regulatory protein kinases in the nucleus.
Allosteric inhibition and activation by metabolites: in particular, end-product inhibition of regulated enzymes by metabolites such as ATP serves as negative feedback regulation of the pathway.
Allosteric inhibition and activation by protein–protein interactions (PPI): some proteins interact with and regulate multiple glycolytic enzymes.
Post-translational modification (PTM): in particular, phosphorylation and dephosphorylation constitute a key mechanism of regulation of pyruvate kinase in the liver.
Localization
Regulation by insulin in animals
In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls, the pancreatic beta cells cease insulin production, and the neighboring pancreatic alpha cells are instead stimulated to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver, glucagon and epinephrine cause the phosphorylation of the key regulated enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis.
Regulated enzymes in glycolysis
The three regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and during exercise, thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver.
In addition, hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, and glucokinase to the level of sugar in the blood, imparting entirely intracellular controls of the glycolytic pathway in different tissues (see below).
When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it is alternatively converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. The regulated enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, and during fasting, exercise, or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, which is then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen stores have been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis occurring at the same time as gluconeogenesis and glycogenolysis.
Hexokinase and glucokinase
All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles).
Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and phosphorylates the glucose entering the cell to form G6P only when glucose in the blood is abundant. Being the first step in the glycolytic pathway in the liver, it imparts an additional layer of control of the glycolytic pathway in this organ.
Phosphofructokinase
Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP).
F2,6BP is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver F2,6BP is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood.
ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP. Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell.
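A rough numeric sketch of that amplification, assuming illustrative nucleotide pool sizes and an adenylate kinase equilibrium 2 ADP ⇌ ATP + AMP with an equilibrium constant near 1 (both are assumptions for illustration): because [AMP] ≈ K·[ADP]²/[ATP], a small fractional drop in ATP that appears as extra ADP raises AMP several-fold.

```python
# Illustrative (assumed) adenine nucleotide pools, mM
K = 1.0            # adenylate kinase equilibrium constant, assumed ~1 here
atp0, adp0 = 5.0, 0.5

def amp_at_equilibrium(atp, adp, K=K):
    # From 2 ADP <=> ATP + AMP:  K = [ATP][AMP]/[ADP]^2
    return K * adp**2 / atp

amp0 = amp_at_equilibrium(atp0, adp0)

# A 10% drop in ATP, with the hydrolyzed ATP appearing as extra ADP
atp1 = 0.9 * atp0
adp1 = adp0 + 0.1 * atp0

amp1 = amp_at_equilibrium(atp1, adp1)
print(f"AMP before: {amp0:.3f} mM, after: {amp1:.3f} mM")
print(f"fold increase in AMP: {amp1/amp0:.1f}x")
# A 10% ATP drop produces a several-fold AMP rise with these assumed pools,
# illustrating why AMP, not ATP, is the sensitive allosteric signal.
```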
Citrate inhibits phosphofructokinase when tested in vitro by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect in vivo, because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis.
TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase), which cleaves the phosphate at carbon-2, producing F6P. It can also behave as a kinase (PFK2), adding a phosphate onto carbon-2 of F6P, which produces F2,6BP. In humans, the TIGAR protein is encoded by the C12orf5 gene. The TIGAR enzyme will hinder the forward progression of glycolysis by creating a buildup of fructose-6-phosphate (F6P), which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P will shunt carbons into the pentose phosphate pathway.
Pyruvate kinase
The final step of glycolysis is catalysed by pyruvate kinase to form pyruvate and another ATP. It is regulated by a range of different transcriptional, covalent and non-covalent regulation mechanisms, which can vary widely in different tissues. For example, in the liver, pyruvate kinase is regulated based on glucose availability. During fasting (no glucose available), glucagon activates protein kinase A, which phosphorylates pyruvate kinase to inhibit it. An increase in blood sugar leads to secretion of insulin, which activates protein phosphatase 1, leading to dephosphorylation and re-activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. Conversely, the isoform of pyruvate kinase found in muscle is not affected by protein kinase A (which is activated by adrenaline in that tissue), so that glycolysis remains active in muscles even during fasting.
Post-glycolysis processes
The overall process of glycolysis is:
Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
If glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available.
Anoxic regeneration of NAD+
One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation:
Pyruvate + NADH + H+ → Lactate + NAD+
This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time.
Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol.
Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source.
Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis.
The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift to glucose fermentation from glucose oxidation to carbon dioxide and water, when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells.
The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle.
Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis"; however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen.
In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds.
Aerobic regeneration of NAD+ and further catabolism of pyruvate
In aerobic eukaryotes, a complex mechanism has developed to use the oxygen in air as the final electron acceptor, in a process called oxidative phosphorylation. Aerobic prokaryotes, which lack mitochondria, use a variety of simpler mechanisms.
Firstly, the NADH + H+ generated by glycolysis has to be transferred to the mitochondrion to be oxidized, and thus to regenerate the NAD+ necessary for glycolysis to continue. However, the inner mitochondrial membrane is impermeable to NADH and NAD+. Use is therefore made of two "shuttles" to transport the electrons from NADH across the mitochondrial membrane. They are the malate–aspartate shuttle and the glycerol phosphate shuttle. In the former, the electrons from NADH are transferred to cytosolic oxaloacetate to form malate. The malate then traverses the inner mitochondrial membrane into the mitochondrial matrix, where it is reoxidized by NAD+, forming intra-mitochondrial oxaloacetate and NADH. The oxaloacetate is then re-cycled to the cytosol via its conversion to aspartate, which is readily transported out of the mitochondrion. In the glycerol phosphate shuttle, electrons from cytosolic NADH are transferred to dihydroxyacetone phosphate to form glycerol-3-phosphate, which readily traverses the outer mitochondrial membrane. Glycerol-3-phosphate is then reoxidized to dihydroxyacetone phosphate, donating its electrons to FAD instead of NAD+. This reaction takes place on the inner mitochondrial membrane, allowing FADH2 to donate its electrons directly to coenzyme Q (ubiquinone), which is part of the electron transport chain, which ultimately transfers electrons to molecular oxygen (O2), with the formation of water, and the release of energy eventually captured in the form of ATP.
The glycolytic end-product, pyruvate (plus NAD+), is converted to acetyl-CoA, CO2, and NADH + H+ within the mitochondria in a process called pyruvate decarboxylation.
The resulting acetyl-CoA enters the citric acid cycle (or Krebs Cycle), where the acetyl group of the acetyl-CoA is converted into carbon dioxide by two decarboxylation reactions with the formation of yet more intra-mitochondrial NADH + H+.
The intra-mitochondrial NADH + H+ is oxidized to NAD+ by the electron transport chain, using oxygen as the final electron acceptor to form water. The energy released during this process is used to create a hydrogen ion (or proton) gradient across the inner membrane of the mitochondrion.
Finally, the proton gradient is used to produce about 2.5 ATP for every NADH + H+ oxidized in a process called oxidative phosphorylation.
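Combining this ~2.5 ATP per NADH figure with the counts from glycolysis, pyruvate decarboxylation, and the citric acid cycle gives the commonly quoted aerobic yield per glucose. The per-pathway NADH/FADH2 counts below are standard textbook values, stated here as assumptions for the arithmetic.

```python
# Approximate aerobic ATP yield per glucose (textbook bookkeeping).
ATP_PER_NADH  = 2.5   # proton-gradient yield quoted above
ATP_PER_FADH2 = 1.5   # FADH2 enters at coenzyme Q, bypassing complex I

substrate_level_atp = 2 + 2   # glycolysis net ATP + citric acid cycle GTP
nadh  = 2 + 2 + 6             # glycolysis + pyruvate decarboxylation + citric acid cycle
fadh2 = 2                     # citric acid cycle

total = substrate_level_atp + nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
print(f"approximate ATP per glucose: {total:.0f}")  # ~32 before shuttle costs
# The malate-aspartate vs glycerol phosphate shuttle choice (see above) lowers
# the effective yield of the 2 cytosolic NADH, hence the usual 30-32 range.
```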
Conversion of carbohydrates into fatty acids and cholesterol
The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA), which is the rate limiting step controlling the synthesis of cholesterol. Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D.
Conversion of pyruvate into oxaloacetate for the citric acid cycle
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix, where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity.
In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl-CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl-CoA that combines with oxaloacetate in the cycle.
To cataplerotically remove oxaloacetate from the citric cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins.
Intermediates for other pathways
This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis.
The following metabolic pathways, among many others, are all strongly reliant on glycolysis as a source of metabolites:
Pentose phosphate pathway, which begins with the dehydrogenation of glucose-6-phosphate, the first intermediate to be produced by glycolysis, produces various pentose sugars, and NADPH for the synthesis of fatty acids and cholesterol.
Glycogen synthesis also starts with glucose-6-phosphate at the beginning of the glycolytic pathway.
Glycerol, for the formation of triglycerides and phospholipids, is produced from the glycolytic intermediate glyceraldehyde-3-phosphate.
Various post-glycolytic pathways:
Fatty acid synthesis
Cholesterol synthesis
The citric acid cycle which in turn leads to:
Amino acid synthesis
Nucleotide synthesis
Tetrapyrrole synthesis
Although gluconeogenesis and glycolysis share many intermediates, neither is functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. Indeed, if both sets of reactions were highly active at the same time, the net result would be the hydrolysis of four high-energy phosphate bonds (two ATP and two GTP) per reaction cycle.
NAD+ is the oxidizing agent in glycolysis, as it is in most other energy-yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see Anoxic regeneration of NAD+ above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle, that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2, and NADPH are formed. NADPH is also formed by the pentose phosphate pathway, which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate.
Glycolysis in disease
Diabetes
Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, insulin resistance or low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when glucose is overproduced by the liver without having a means of being broken down by the body, hyperglycemia results.
Genetic diseases
Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of mutations that do occur result in an inability of the cell to respire, and therefore cause the death of the cell at an early stage. However, some mutations (glycogen storage diseases and other inborn errors of carbohydrate metabolism) are seen, with one notable example being pyruvate kinase deficiency, which leads to chronic hemolytic anemia.
In combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, glycolysis is reduced by approximately 50%, which is caused by reduced lipoylation of mitochondrial enzymes such as the pyruvate dehydrogenase complex and the α-ketoglutarate dehydrogenase complex.
Cancer
Malignant tumor cells perform glycolysis at a rate that is ten times faster than their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes which result in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolysis enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than by the uncontrolled growth of cells.
A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism.
This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET).
There is ongoing research to affect mitochondrial metabolism and treat cancer by reducing glycolysis and thus starving cancerous cells in various new ways, including a ketogenic diet.
Interactive pathway map
The diagram below shows human protein names. Names in other organisms may be different and the number of isozymes (such as HK1, HK2, ...) is likely to be different too.
Alternative nomenclature
Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle.
Structure of glycolysis components in Fischer projections and polygonal model
The intermediates of glycolysis depicted in Fischer projections show the chemical changes step by step. Such images can be compared with the polygonal model representation.
Gluon

A gluon is a type of massless elementary particle that mediates the strong interaction between quarks, acting as the exchange particle for the interaction. Gluons are massless vector bosons, thereby having a spin of 1. Through the strong interaction, gluons bind quarks into groups according to quantum chromodynamics (QCD), forming hadrons such as protons and neutrons.
Gluons carry the color charge of the strong interaction, thereby participating in the strong interaction as well as mediating it. Because gluons carry the color charge, QCD is more difficult to analyze compared to quantum electrodynamics (QED) where the photon carries no electric charge.
The term was coined by Murray Gell-Mann in 1962 for being similar to an adhesive or glue that keeps the nucleus together. Together with the quarks, these particles were referred to as partons by Richard Feynman.
Properties
The gluon is a vector boson, which means it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the field polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass (if any) to less than a few MeV/c2. The gluon has negative intrinsic parity.
Counting gluons
There are eight independent types of gluons in QCD. This is unlike the photon of QED or the three W and Z bosons of the weak interaction.
Additionally, gluons are subject to the color charge phenomena. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons carry both color and anticolor. This gives nine possible combinations of color and anticolor in gluons. The following is a list of those combinations (and their schematic names):
red–antired red–antigreen red–antiblue
green–antired green–antigreen green–antiblue
blue–antired blue–antigreen blue–antiblue
These possible combinations are only effective states, not the actual observed color states of gluons. To understand how they are combined, it is necessary to consider the mathematics of color charge in more detail.
Color singlet states
The stable strongly interacting particles, including hadrons like the proton or the neutron, are observed to be "colorless". More precisely, they are in a "color singlet" state, mathematically analogous to a spin singlet state. Such states allow interaction with other color singlets, but not with other color states; because long-range gluon interactions do not exist, this shows that gluons in the singlet state do not exist either.
The color singlet state is:

(rr̄ + bb̄ + gḡ)/√3
If one could measure the color of the state, there would be equal probabilities of it being red–antired, blue–antiblue, or green–antigreen.
Eight color states
There are eight remaining independent color states corresponding to the "eight types" or "eight colors" of gluons. Since the states can be mixed together, there are multiple ways of presenting these states. These are known as the "color octet", and a commonly used list is:

(rḡ + gr̄)/√2    −i(rḡ − gr̄)/√2
(rb̄ + br̄)/√2    −i(rb̄ − br̄)/√2
(gb̄ + bḡ)/√2    −i(gb̄ − bḡ)/√2
(rr̄ − gḡ)/√2    (rr̄ + gḡ − 2bb̄)/√6
These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 = 8 states. There is no way to add any combination of these states to produce any others. It is also impossible to add them to make the forbidden singlet state rr̄ + gḡ + bb̄. There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results.
Group theory details
Formally, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in Nf flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers, like photons or gluons, is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is N² − 1.
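This dimension count can be checked directly. The NumPy sketch below builds the eight Gell-Mann matrices (the conventional generators corresponding to the color octet above) and verifies that they are traceless, Hermitian, and linearly independent, so the adjoint representation of SU(3) has dimension 3² − 1 = 8.

```python
import numpy as np

i = 1j
# The eight Gell-Mann matrices: the standard basis of traceless
# Hermitian 3x3 matrices, i.e. the generators of SU(3).
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -i, 0], [i, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -i], [0, 0, 0], [i, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -i], [0, i, 0]]),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]

for m in lam:
    assert abs(np.trace(m)) < 1e-12      # traceless
    assert np.allclose(m, m.conj().T)    # Hermitian

# Linear independence: flatten each matrix to a vector and check the rank.
stack = np.array([m.flatten() for m in lam])
N = 3
assert np.linalg.matrix_rank(stack) == N**2 - 1 == 8
print("8 independent generators: dim of SU(3) adjoint = N^2 - 1 =", N**2 - 1)
```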
In group theory, there are no color singlet gluons because quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known a priori reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3). If the group were U(3), the ninth (colorless singlet) gluon would behave like a "second photon" and not like the other eight gluons.
Confinement
Since gluons themselves carry color charge, they participate in strong interactions. These gluon–gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to about 10⁻¹⁵ meters, roughly the size of a nucleon. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark–antiquark pair out of the vacuum rather than increase the length of the flux tube.
One consequence of the hadron-confinement property of gluons is that they are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons.
Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons — called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles.
Experimental observations
Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As revealed in 1978 summer conferences, the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance Υ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin = 1 nature of the gluon (see also the recollection and PLUTO experiments).
In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), three-jet topologies were again observed, now clearly visible and interpreted as quark–antiquark–gluon (qq̄g) bremsstrahlung, by the TASSO, MARK-J and PLUTO experiments (later in 1980 also by JADE). The spin = 1 property of the gluon was confirmed in 1980 by the TASSO and PLUTO experiments (see also the review). In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result.
The gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS, in the years 1996–2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA. The gluon density in the photon (when the photon behaves hadronically) has also been measured.
Color confinement is verified by the failure of free quark searches (searches for fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate for the quantum color and flavor numbers; however, at Fermilab single production of top quarks has been shown. No glueball has been demonstrated.
Deconfinement was claimed in 2000 at the CERN SPS in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, in which quarks and gluons interact less strongly than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments. A quark–gluon plasma state was confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010.
Jefferson Lab's Continuous Electron Beam Accelerator Facility, in Newport News, Virginia, is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility – Brookhaven National Laboratory, on Long Island, New York – for funds to build a new electron-ion collider. In December 2019, the US Department of Energy selected the Brookhaven National Laboratory to host the electron-ion collider.
Galois group

In mathematics, in the area of abstract algebra known as Galois theory, the Galois group of a certain type of field extension is a specific group associated with the field extension. The study of field extensions and their relationship to the polynomials that give rise to them via Galois groups is called Galois theory, so named in honor of Évariste Galois who first discovered them.
For a more elementary discussion of Galois groups in terms of permutation groups, see the article on Galois theory.
Definition
Suppose that E is an extension of the field F (written as E/F and read "E over F"). An automorphism of E/F is defined to be an automorphism of E that fixes F pointwise. In other words, an automorphism of E/F is an isomorphism α : E → E such that α(x) = x for each x ∈ F. The set of all automorphisms of E/F forms a group with the operation of function composition. This group is sometimes denoted by Aut(E/F).

If E/F is a Galois extension, then Aut(E/F) is called the Galois group of E/F, and is usually denoted by Gal(E/F).

If E/F is not a Galois extension, then the Galois group of E/F is sometimes defined as Aut(K/F), where K is the Galois closure of E.
Galois group of a polynomial
Another definition of the Galois group comes from the Galois group of a polynomial f ∈ F[x]. If there is a field K/F such that f factors as a product of linear polynomials

f(x) = (x − α1)(x − α2) ⋯ (x − αk)

over the field K, then the Galois group of the polynomial f is defined as the Galois group of K/F, where K is minimal among all such fields.
Structure of Galois groups
Fundamental theorem of Galois theory
One of the important structure theorems from Galois theory comes from the fundamental theorem of Galois theory. This states that given a finite Galois extension K/k, there is a bijection between the set of subfields k ⊂ E ⊂ K and the subgroups H ⊂ G = Gal(K/k). Then, E is given by the set of invariants of K under the action of H, so

E = K^H = { a ∈ K : σ(a) = a for all σ ∈ H }.

Moreover, if H is a normal subgroup, then Gal(E/k) ≅ G/H. And conversely, if E/k is a normal field extension, then the associated subgroup in Gal(K/k) is a normal group.
Lattice structure
Suppose K1 and K2 are Galois extensions of k with Galois groups G1 and G2. The field K1K2 with Galois group G = Gal(K1K2/k) has an injection G → G1 × G2 which is an isomorphism whenever K1 ∩ K2 = k.
Inducting
As a corollary, this can be inducted finitely many times. Given Galois extensions K1, ..., Kn of k where Ki+1 ∩ (K1 ⋯ Ki) = k, then there is an isomorphism of the corresponding Galois groups:

Gal(K1 ⋯ Kn / k) ≅ G1 × G2 × ⋯ × Gn.
Examples
In the following examples F is a field, and C, R, and Q are the fields of complex, real, and rational numbers, respectively. The notation F(a) indicates the field extension obtained by adjoining an element a to the field F.
Computational tools
Cardinality of the Galois group and the degree of the field extension
One of the basic propositions required for completely determining the Galois groups of a finite field extension is the following: Given a polynomial f ∈ F[x], let E/F be its splitting field extension. Then the order of the Galois group is equal to the degree of the field extension; that is,

|Gal(E/F)| = [E : F].
Eisenstein's criterion
A useful tool for determining the Galois group of a polynomial comes from Eisenstein's criterion. If a polynomial f ∈ F[x] factors into irreducible polynomials f = f1 f2 ⋯ fk, the Galois group of f can be determined using the Galois groups of each fi, since the Galois group of f contains each of the Galois groups of the fi.
Trivial group
Gal(F/F) is the trivial group that has a single element, namely the identity automorphism.
Another example of a Galois group which is trivial is Aut(R/Q). Indeed, it can be shown that any automorphism of R must preserve the ordering of the real numbers and hence must be the identity.
Consider the field K = Q(∛2). The group Aut(K/Q) contains only the identity automorphism. This is because K is not a normal extension, since the other two cube roots of 2,

exp(2πi/3) ∛2 and exp(4πi/3) ∛2,

are missing from the extension; in other words, K is not a splitting field.
Finite abelian groups
The Galois group Gal(C/R) has two elements, the identity automorphism and the complex conjugation automorphism.
Quadratic extensions
The degree two field extension Q(√2)/Q has the Galois group Gal(Q(√2)/Q) with two elements: the identity automorphism and the automorphism σ which exchanges √2 and −√2. This example generalizes for any prime number p, replacing √2 with √p.
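This automorphism can be checked mechanically. In the sketch below, an element a + b√2 of Q(√2) is modelled as the pair (a, b) of rationals, and random sampling confirms that the conjugation map σ : a + b√2 ↦ a − b√2 preserves addition and multiplication, fixes Q pointwise, and has order 2.

```python
from fractions import Fraction
from random import randint

# An element a + b*sqrt(2) of Q(sqrt(2)) is modelled as the pair (a, b).
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
    a, b = x; c, d = y
    return (a*c + 2*b*d, a*d + b*c)

def sigma(x):
    # The non-identity automorphism: sqrt(2) -> -sqrt(2)
    return (x[0], -x[1])

def rand_elem():
    q = lambda: Fraction(randint(-9, 9), randint(1, 9))
    return (q(), q())

for _ in range(1000):
    x, y = rand_elem(), rand_elem()
    assert sigma(add(x, y)) == add(sigma(x), sigma(y))   # additive
    assert sigma(mul(x, y)) == mul(sigma(x), sigma(y))   # multiplicative
    a = (Fraction(randint(-9, 9)), Fraction(0))
    assert sigma(a) == a                                  # fixes Q pointwise
    assert sigma(sigma(x)) == x                           # sigma has order 2

print("sigma is a field automorphism of Q(sqrt(2)) fixing Q; Gal has order 2")
```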
Product of quadratic extensions
Using the lattice structure of Galois groups, for non-equal prime numbers p1, ..., pk the Galois group of Q(√p1, ..., √pk)/Q is

Gal(Q(√p1, ..., √pk)/Q) ≅ (Z/2Z)^k.
Cyclotomic extensions
Another useful class of examples comes from the splitting fields of cyclotomic polynomials. These are polynomials defined as
whose degree is , Euler's totient function at . Then, the splitting field over is and has automorphisms sending for relatively prime to . Since the degree of the field is equal to the degree of the polynomial, these automorphisms generate the Galois group. If then
If $n$ is a prime $p$, then a corollary of this is
$$\operatorname{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q}) \cong \mathbb{Z}/(p-1)\mathbb{Z}.$$
In fact, any finite abelian group can be found as the Galois group of some subfield of a cyclotomic field extension by the Kronecker–Weber theorem.
Finite fields
Another useful class of examples of Galois groups with finite abelian groups comes from finite fields. If $q$ is a prime power, and if $\mathbb{F}_q$ and $\mathbb{F}_{q^n}$ denote the Galois fields of order $q$ and $q^n$ respectively, then $\operatorname{Gal}(\mathbb{F}_{q^n}/\mathbb{F}_q)$ is cyclic of order $n$ and generated by the Frobenius homomorphism.
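To spell this out (standard material, added here for illustration): the Frobenius homomorphism is $\varphi(x) = x^q$, which fixes $\mathbb{F}_q$ elementwise, and its iterates $1, \varphi, \varphi^2, \ldots, \varphi^{n-1}$ are pairwise distinct automorphisms of $\mathbb{F}_{q^n}$, so
$$\operatorname{Gal}(\mathbb{F}_{q^n}/\mathbb{F}_q) = \langle \varphi \rangle \cong \mathbb{Z}/n\mathbb{Z}, \qquad \varphi(x) = x^q.$$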
Degree 4 examples
The field extension $\mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}$ is an example of a degree $4$ field extension. This has two automorphisms $\sigma, \tau$ where $\sigma(\sqrt{2}) = -\sqrt{2}$ and $\tau(\sqrt{3}) = -\sqrt{3}$. Since these two generators define a group of order $4$, the Klein four-group, they determine the entire Galois group.
Another example is given from the splitting field $E/\mathbb{Q}$ of the polynomial
$$f(x) = x^4 + x^3 + x^2 + x + 1.$$
Note because the roots of $f(x)$ are the primitive fifth roots of unity $\exp\left(\tfrac{2 k \pi i}{5}\right)$, there are automorphisms
$$\sigma_k \colon \zeta_5 \mapsto \zeta_5^k \qquad (1 \le k \le 4),$$
generating a group of order $4$. Since $\sigma_2$ generates this group, the Galois group is isomorphic to $\mathbb{Z}/4\mathbb{Z}$.
Finite non-abelian groups
Consider now $L = \mathbb{Q}(\sqrt[3]{2}, \omega)$, where $\omega$ is a primitive cube root of unity. The group $\operatorname{Gal}(L/\mathbb{Q})$ is isomorphic to $S_3$, the dihedral group of order 6, and $L$ is in fact the splitting field of $x^3 - 2$ over $\mathbb{Q}$.
Quaternion group
The quaternion group can be found as the Galois group of a field extension of $\mathbb{Q}$. For example, the field extension
$$\mathbb{Q}\left(\sqrt{2}, \sqrt{3}, \sqrt{(2 + \sqrt{2})(3 + \sqrt{3})}\right)$$
has the prescribed Galois group.
Symmetric group of prime order
If $f$ is an irreducible polynomial of prime degree $p$ with rational coefficients and exactly two non-real roots, then the Galois group of $f$ is the full symmetric group $S_p$.
For example, $f(x) = x^5 - 4x + 2 \in \mathbb{Q}[x]$ is irreducible by Eisenstein's criterion. Plotting the graph of $f$ with graphing software or on paper shows it has three real roots, hence two complex roots, showing its Galois group is $S_5$.
Comparing Galois groups of field extensions of global fields
Given a global field extension $K/k$ and equivalence classes of valuations $w$ on $K$ (such as the $p$-adic valuation) and $v$ on $k$ such that their completions give a Galois field extension $K_w/k_v$ of local fields, there is an induced action of the Galois group $G = \operatorname{Gal}(K/k)$ on the set of equivalence classes of valuations such that the completions of the fields are compatible. This means if $s \in G$ then there is an induced isomorphism of local fields
$$s_w \colon K_w \to K_{sw}.$$
Since we have taken the hypothesis that $w$ lies over $v$ (i.e. there is a Galois field extension $K_w/k_v$), the field morphism $s_w$ is in fact an isomorphism of $k_v$-algebras. If we take the isotropy subgroup of $G$ for the valuation class $w$,
$$G_w = \{ s \in G : s w = w \},$$
then there is a surjection of the global Galois group to the local Galois group such that there is an isomorphism between the local Galois group and the isotropy subgroup. Diagrammatically, this means that the surjection $\operatorname{Gal}(K/k) \twoheadrightarrow \operatorname{Gal}(K_w/k_v)$ factors through an isomorphism $\operatorname{Gal}(K_w/k_v) \cong G_w$. This gives a technique for constructing Galois groups of local fields using global Galois groups.
Infinite groups
A basic example of a field extension with an infinite group of automorphisms is $\operatorname{Aut}(\mathbb{C}/\mathbb{Q})$, since it contains every algebraic field extension $E/\mathbb{Q}$. For example, the field extensions $\mathbb{Q}(\sqrt{a})/\mathbb{Q}$ for a square-free element $a \in \mathbb{Q}$ each have a unique degree $2$ automorphism, inducing an automorphism in $\operatorname{Aut}(\mathbb{C}/\mathbb{Q})$.
One of the most studied classes of infinite Galois group is the absolute Galois group, which is an infinite, profinite group defined as the inverse limit of all finite Galois extensions $E/k$ for a fixed field. The inverse limit is denoted
$$\operatorname{Gal}(\overline{k}/k) := \varprojlim_{E/k \text{ finite separable}} \operatorname{Gal}(E/k),$$
where $\overline{k}$ is the separable closure of the field $k$. Note this group is a topological group. Some basic examples include $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ and
$$\operatorname{Gal}(\overline{\mathbb{F}}_q/\mathbb{F}_q) \cong \hat{\mathbb{Z}} \cong \varprojlim_n \mathbb{Z}/n\mathbb{Z}.$$
Another readily computable example comes from the field extension $\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5}, \ldots)/\mathbb{Q}$ containing the square root of every positive prime. It has Galois group
$$\operatorname{Gal}(\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5}, \ldots)/\mathbb{Q}) \cong \prod_{p \text{ prime}} \mathbb{Z}/2\mathbb{Z},$$
which can be deduced from the profinite limit
$$\cdots \to \operatorname{Gal}(\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5})/\mathbb{Q}) \to \operatorname{Gal}(\mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}) \to \operatorname{Gal}(\mathbb{Q}(\sqrt{2})/\mathbb{Q})$$
and using the computation of the Galois groups.
Properties
The significance of an extension being Galois is that it obeys the fundamental theorem of Galois theory: the closed (with respect to the Krull topology) subgroups of the Galois group correspond to the intermediate fields of the field extension.
If $E/F$ is a Galois extension, then $\operatorname{Gal}(E/F)$ can be given a topology, called the Krull topology, that makes it into a profinite group.
| Mathematics | Abstract algebra | null |
12695 | https://en.wikipedia.org/wiki/Group%20representation | Group representation | In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations of a vector space to itself (i.e. vector space automorphisms); in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication.
In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules.
Representations of groups allow many group-theoretic problems to be reduced to problems in linear algebra. In physics, they describe how the symmetry group of a physical system affects the solutions of equations describing that system.
The term representation of a group is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a linear representation. Some people use realization for the general notion and reserve the term representation for the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations.
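As a small illustrative example (not from the original text): the cyclic group $C_3 = \{e, g, g^2\}$ has a two-dimensional real linear representation $\rho \colon C_3 \to \mathrm{GL}_2(\mathbb{R})$ sending the generator to rotation by $2\pi/3$,
$$\rho(g^k) = \begin{pmatrix} \cos(2\pi k/3) & -\sin(2\pi k/3) \\ \sin(2\pi k/3) & \cos(2\pi k/3) \end{pmatrix},$$
so that $\rho(g^j g^k) = \rho(g^j)\,\rho(g^k)$; the group operation becomes matrix multiplication, exactly as described above.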
Branches of group representation theory
The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are:
Finite groups — Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to crystallography and to geometry. If the field of scalars of the vector space has characteristic p, and if p divides the order of the group, then this is called modular representation theory; this special case has very different properties. See Representation theory of finite groups.
Compact groups or locally compact groups — Many of the results of finite group representation theory are proved by averaging over the group. These proofs can be carried over to infinite groups by replacement of the average with an integral, provided that an acceptable notion of integral can be defined. This can be done for locally compact groups, using the Haar measure. The resulting theory is a central part of harmonic analysis. The Pontryagin duality describes the theory for commutative groups, as a generalised Fourier transform. | Mathematics | Algebra | null |
12701 | https://en.wikipedia.org/wiki/Greenwich%20Mean%20Time | Greenwich Mean Time | Greenwich Mean Time (GMT) is the local mean time at the Royal Observatory in Greenwich, London, counted from midnight. At different times in the past, it has been calculated in different ways, including being calculated from noon; as a consequence, it cannot be used to specify a particular time unless a context is given. The term "GMT" is also used as one of the names for the time zone UTC+00:00 and, in UK law, is the basis for civil time in the United Kingdom.
Because of Earth's uneven angular velocity in its elliptical orbit and its axial tilt, noon (12:00:00) GMT is rarely the exact moment the Sun crosses the Greenwich Meridian and reaches its highest point in the sky there. This event may occur up to 16 minutes before or after noon GMT, a discrepancy described by the equation of time. Noon GMT is the annual average (the arithmetic mean) moment of this event, which accounts for the word "mean" in "Greenwich Mean Time".
Originally, astronomers considered a GMT day to start at noon, while for almost everyone else it started at midnight. To avoid confusion, the name Universal Time was introduced in 1928 to denote GMT as counted from midnight. Today, Universal Time usually refers to Coordinated Universal Time (UTC) or UT1; English speakers often use GMT as a synonym for UTC. For navigation, it is considered equivalent to UT1 (the modern form of mean solar time at 0° longitude); but this meaning can differ from UTC by up to 0.9s. The term "GMT" should thus not be used for purposes that require precision.
The term "GMT" is especially used by institutional bodies within the United Kingdom, such as the BBC World Service, the Royal Navy, and the Met Office; and others particularly in Arab countries, such as the Middle East Broadcasting Centre and OSN.
History
As the United Kingdom developed into an advanced maritime nation, British mariners kept at least one chronometer on GMT to calculate their longitude from the Greenwich meridian, which was considered to have longitude zero degrees, by a convention adopted in the International Meridian Conference of 1884. Synchronisation of the chronometer on GMT did not affect shipboard time, which was still solar time. But this practice, combined with mariners from other nations drawing from Nevil Maskelyne's method of lunar distances based on observations at Greenwich, led to GMT being used worldwide as a standard time independent of location. Most time zones were based upon GMT, as an offset of a number of hours (and occasionally half or quarter hours) "ahead of GMT" or "behind GMT".
Greenwich Mean Time was adopted across the island of Great Britain by the Railway Clearing House in 1847 and by almost all railway companies by the following year, from which the term railway time is derived. It was gradually adopted for other purposes, but a legal case in 1858 held "local mean time" to be the official time. On 14 May 1880, a letter signed by "Clerk to Justices" appeared in The Times, stating that "Greenwich time is now kept almost throughout England, but it appears that Greenwich time is not legal time. For example, our polling booths were opened, say, at 8 13 and closed at 4 13 p.m." This was changed later in 1880, when Greenwich Mean Time was legally adopted throughout the island of Great Britain. GMT was adopted in the Isle of Man in 1883, in Jersey in 1898 and in Guernsey in 1913. Ireland adopted GMT in 1916, supplanting Dublin Mean Time. Hourly time signals from Greenwich Observatory were first broadcast by shortwave radio on 5 February 1924 at 17:30:00 UTC, rendering the time ball at the observatory redundant.
The daily rotation of the Earth is irregular (see ΔT) and has a slowing trend; therefore atomic clocks constitute a much more stable timebase. On 1 January 1972, GMT as the international civil time standard was superseded by Coordinated Universal Time (UTC), maintained by an ensemble of atomic clocks around the world. Universal Time (UT), a term introduced in 1928, initially represented mean time at Greenwich determined in the traditional way to accord with the originally defined universal day; from 1 January 1956 (as decided by the International Astronomical Union in Dublin in 1955, at the initiative of William Markowitz) this "raw" form of UT was re-labelled UT0 and effectively superseded by refined forms UT1 (UT0 equalised for the effects of polar wandering) and UT2 (UT1 further equalised for annual seasonal variations in Earth rotation rate).
Ambiguity in the definition of GMT
Historically, GMT has been used with two different conventions for numbering hours. The long-standing astronomical convention, dating from the work of Ptolemy, was to refer to noon as zero hours (see Julian day). This contrasted with the civil convention of referring to midnight as zero hours dating from the Roman Empire. The latter convention was adopted on and after 1 January 1925 for astronomical purposes, resulting in a discontinuity of 12 hours, or half a day. The instant that was designated as "December 31.5 GMT" in 1924 almanacs became "January 1.0 GMT" in 1925 almanacs. The term Greenwich Mean Astronomical Time (GMAT) was introduced to unambiguously refer to the previous noon-based astronomical convention for GMT. The more specific terms UT and UTC do not share this ambiguity, always referring to midnight as zero hours.
GMT in legislation
United Kingdom
Legally, the civil time used in the UK is called "Greenwich mean time" (without capitalisation), with an exception made for those periods when the Summer Time Act 1972 orders an hour's shift for daylight saving. The Interpretation Act 1978, section 9, provides that whenever an expression of time occurs in any Act, the time referred to shall (unless otherwise specifically stated) be held to be Greenwich mean time. Under subsection 23, the same rule applies to deeds and other instruments.
During the experiment of 1968 to 1971, when the British Isles did not revert to Greenwich Mean Time during the winter, the all-year British Summer Time was called British Standard Time (BST).
In the UK, UTC+00:00 is disseminated to the general public in winter and UTC+01:00 in summer.
BBC radio stations broadcast the "six pips" of the Greenwich Time Signal. It is named from its original generation at the Royal Greenwich Observatory. If announced (such as near the start of summer time or of winter time), announcers on domestic channels declare the time as GMT or BST as appropriate. As the BBC World Service is broadcast to all time zones, the announcers use the term "Greenwich Mean Time" consistently throughout the year.
Other countries
Several countries define their local time by reference to Greenwich Mean Time. Some examples are:
Belgium: Decrees of 1946 and 1947 set legal time as one hour ahead of GMT.
Ireland: "Standard Time" () is defined as being one hour in advance of GMT. "Winter Time" () is defined as being the same as GMT.
Canada: Interpretation Act, R.S.C. 1985, c. I-21, section 35(1). This refers to "standard time" for the several provinces, defining each in relation to "Greenwich time", but does not use the expression "Greenwich mean time". Several provinces, such as Nova Scotia (Time Definition Act. R.S., c. 469, s. 1), have their own legislation which specifically mentions either "Greenwich Mean Time" or "Greenwich mean solar time".
Philippines: The term GMT is still in use when it comes to electronics such as cellular phones. Android phones use "UTC" but keypad phones use "GMT" to define any time zone around the world.
Time zone
Greenwich Mean Time is defined in law as standard time in the following countries and areas, which also advance their clocks one hour (GMT+1) in summer.
United Kingdom, where the summer time is called British Summer Time (BST)
Ireland, where it is called Winter Time, changing to Standard Time in summer.
Portugal (with the exception of the Azores)
Canary Islands
Faroe Islands
Greenwich Mean Time is used as standard time all year round in the following countries and areas:
Burkina Faso
The Gambia
Ghana
Guinea
Guinea-Bissau
Iceland
Ivory Coast
Liberia
Mali
Mauritania
Sahrawi Arab Democratic Republic (disputed)
Saint Helena, Ascension and Tristan da Cunha
Senegal
Sierra Leone
Togo
| Technology | Timekeeping | null |
12702 | https://en.wikipedia.org/wiki/GIF | GIF | The Graphics Interchange Format (GIF; /ɡɪf/ or /dʒɪf/) is a bitmap image format that was developed by a team at the online services provider CompuServe led by American computer scientist Steve Wilhite and released on June 15, 1987.
The format can contain up to 8 bits per pixel, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It can also represent multiple images in a file, which can be used for animations, and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients but well-suited for simpler images such as graphics or logos with solid areas of color.
GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality.
While once in widespread usage on the World Wide Web because of its wide implementation and portability between applications and operating systems, usage of the format has declined for space and quality reasons, often being replaced with video formats such as the MP4 file format. These replacements, in turn, are sometimes termed "GIFs" despite having no relation to the original file format.
History
CompuServe introduced GIF on 15 June 1987 to provide a color image format for their file downloading areas. This replaced their earlier run-length encoding format, which was black and white only. GIF became popular because it used Lempel–Ziv–Welch data compression. Since this was more efficient than the run-length encoding used by PCX and MacPaint, fairly large images could be downloaded reasonably quickly even with slow modems.
The original version of GIF was called 87a. This version already supported multiple images in a stream.
In 1989, CompuServe released an enhanced version, called 89a. This version added:
support for animation delays
transparent background colors
storage of application-specific metadata
allowing text labels as text (not embedding them in the graphical data). As there is little control over display fonts, however, this feature is rarely used.
The two versions can be distinguished by looking at the first six bytes of the file (the "magic number" or signature), which, when interpreted as ASCII, read "GIF87a" or "GIF89a", respectively.
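A minimal sketch of this version check in Python (the file name is hypothetical):

```python
def gif_version(path):
    """Return '87a' or '89a' by reading the 6-byte GIF signature."""
    with open(path, "rb") as f:
        magic = f.read(6)
    if magic == b"GIF87a":
        return "87a"
    if magic == b"GIF89a":
        return "89a"
    raise ValueError("not a GIF: signature %r" % magic)

print(gif_version("example.gif"))  # hypothetical file name
```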
CompuServe encouraged the adoption of GIF by providing downloadable conversion utilities for many computers. By December 1987, for example, an Apple IIGS user could view pictures created on an Atari ST or Commodore 64. GIF was one of the first two image formats commonly used on Web sites, the other being the black-and-white XBM.
In September 1995 Netscape Navigator 2.0 added the ability for animated GIFs to loop.
While GIF was developed by CompuServe, it used the Lempel–Ziv–Welch (LZW) lossless data compression algorithm patented by Unisys in 1985. Controversy over the licensing agreement between Unisys and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. In 2004, all patents relating to the proprietary compression used for GIF expired.
The feature of storing multiple images in one file, accompanied by control data, is used extensively on the Web to produce simple animations.
The optional interlacing feature, which stores image scan lines out of order in such a fashion that even a partially downloaded image was somewhat recognizable, also helped GIF's popularity, as a user could abort the download if it was not what was required.
In May 2015, Facebook added support for GIFs. In January 2018, Instagram also added GIF stickers to its story mode.
In 2016 the Internet Archive released a searchable library of GIFs from their Geocities archive.
Terminology
As a noun, the word GIF is found in the newer editions of many dictionaries. In 2012, the American wing of the Oxford University Press recognized GIF as a verb as well, meaning "to create a GIF file", as in "GIFing was the perfect medium for sharing scenes from the Summer Olympics". The press's lexicographers voted it their word of the year, saying that GIFs have evolved into "a tool with serious applications including research and journalism".
Pronunciation
The pronunciation of the first letter of GIF has been disputed since the 1990s. The most common pronunciations in English are /dʒɪf/ (with a soft g as in gin) and /ɡɪf/ (with a hard g as in gift), differing in the phoneme represented by the letter G. The creators of the format pronounced the acronym GIF as /dʒɪf/, with a soft g, with Wilhite stating that he intended for the pronunciation to deliberately echo the American peanut butter brand Jif, and CompuServe employees would often quip "choosy developers choose GIF", a spoof of Jif's television commercials. However, the word is widely pronounced as /ɡɪf/, with a hard g, and polls have generally shown that this hard g pronunciation is more prevalent.
Dictionary.com cites both pronunciations, indicating /dʒɪf/ as the primary pronunciation, while Cambridge Dictionary of American English offers only the hard-g pronunciation. Merriam-Webster's Collegiate Dictionary and Oxford Dictionaries cite both pronunciations, but place the hard g first: /ɡɪf, dʒɪf/. The New Oxford American Dictionary gave only /dʒɪf/ in its second edition but updated it to /dʒɪf, ɡɪf/ in the third edition.
The disagreement over the pronunciation has led to heated Internet debate. On the occasion of receiving a lifetime achievement award at the 2013 Webby Awards ceremony, Wilhite publicly rejected the hard-g pronunciation; his speech led to more than 17,000 posts on Twitter and dozens of news articles. The White House and the TV program Jeopardy! also entered the debate in 2013. In February 2020, The J.M. Smucker Company, the owners of the Jif brand, partnered with the animated image database and search engine Giphy to release a limited-edition "Jif vs. GIF" (hashtagged as #JIFvsGIF) jar of peanut butter that had a label humorously declaring the soft-g pronunciation to refer exclusively to the peanut butter, and GIF to be exclusively pronounced with the hard-g pronunciation.
Usage
GIFs are suitable for sharp-edged line art with a limited number of colors, such as logos. This takes advantage of the format's lossless compression, which favors flat areas of uniform color with well defined edges. They can also be used to store low-color sprite data for games. GIFs can be used for small animations and low-resolution video clips, or as reactions in online messaging used to convey emotion and feelings instead of using words. They are popular on social media platforms such as Tumblr, Facebook and Twitter.
File format
Conceptually, a GIF file describes a fixed-sized graphical area (the "logical screen") populated with zero or more "images". Many GIF files have a single image that fills the entire logical screen. Others divide the logical screen into separate sub-images. The images may also function as animation frames in an animated GIF file, but again these need not fill the entire logical screen.
GIF files start with a fixed-length header ("GIF87a" or "GIF89a") giving the version, followed by a fixed-length Logical Screen Descriptor giving the pixel dimensions and other characteristics of the logical screen. The screen descriptor may also specify the presence and size of a Global Color Table (GCT), which follows next if present.
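A hedged sketch of reading the header and Logical Screen Descriptor in Python; the bit positions in the packed byte (Global Color Table flag in bit 7, table size in the low three bits) come from the GIF specification rather than from the text above, so treat them as assumptions:

```python
import struct

def read_screen_descriptor(f):
    """Read the signature and the 7-byte Logical Screen Descriptor."""
    signature = f.read(6)                       # b"GIF87a" or b"GIF89a"
    width, height, packed, bg_index, aspect = struct.unpack("<HHBBB", f.read(7))
    gct_present = bool(packed & 0x80)           # bit 7: Global Color Table flag
    gct_entries = 2 << (packed & 0x07) if gct_present else 0  # 2^(n+1) colors
    return signature, width, height, gct_present, gct_entries
```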
Thereafter, the file is divided into segments of the following types, each introduced by a 1-byte sentinel:
An image (introduced by 0x2C, an ASCII comma )
An extension block (introduced by 0x21, an ASCII exclamation point )
The trailer (a single byte of value 0x3B, an ASCII semicolon ), which should be the last byte of the file.
An image starts with a fixed-length Image Descriptor, which may specify the presence and size of a Local Color Table (which follows next if present). The image data follows: one byte giving the bit width of the unencoded symbols (which must be at least 2 bits wide, even for bi-color images), followed by a series of sub-blocks containing the LZW-encoded data.
Extension blocks (blocks that "extend" the 87a definition via a mechanism already defined in the 87a spec) consist of the sentinel, an additional byte specifying the type of extension, and a series of sub-blocks with the extension data. Extension blocks that modify an image (like the Graphic Control Extension that specifies the optional animation delay time and optional transparent background color) must immediately precede the segment with the image they refer to.
Each sub-block begins with a byte giving the number of subsequent data bytes in the sub-block (1 to 255). The series of sub-blocks is terminated by an empty sub-block (a 0 byte).
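Putting the sentinel bytes and the sub-block rule together gives a minimal segment walker (an illustrative sketch, not a full decoder; the layout of the 9-byte Image Descriptor and its packed byte is taken from the GIF specification, not from the text above):

```python
def skip_subblocks(f):
    """Advance past length-prefixed sub-blocks until the empty (0) one."""
    while True:
        n = f.read(1)[0]
        if n == 0:
            return
        f.seek(n, 1)                      # skip n data bytes

def walk_segments(f):
    """Yield a tag for each top-level segment after the color table."""
    while True:
        sentinel = f.read(1)[0]
        if sentinel == 0x3B:              # trailer: end of file
            return
        elif sentinel == 0x21:            # extension block
            label = f.read(1)[0]          # extension type byte
            skip_subblocks(f)
            yield ("extension", label)
        elif sentinel == 0x2C:            # image
            desc = f.read(9)              # fixed-length Image Descriptor
            if desc[8] & 0x80:            # Local Color Table present
                f.seek(3 * (2 << (desc[8] & 0x07)), 1)
            f.read(1)                     # LZW minimum code size byte
            skip_subblocks(f)             # the LZW-coded image data
            yield ("image",)
        else:
            raise ValueError("unexpected sentinel 0x%02X" % sentinel)
```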
This structure allows the file to be parsed even if not all parts are understood. A GIF marked 87a may contain extension blocks; the intent is that a decoder can read and display the file without the features covered in extensions it does not understand.
The full detail of the file format is covered in the GIF specification.
Palettes
GIF is palette-based: the colors used in an image (a frame) in the file have their RGB values defined in a palette table that can hold up to 256 entries, and the data for the image refer to the colors by their indices (0–255) in the palette table. The color definitions in the palette can be drawn from a color space of millions of shades (2^24 shades, 8 bits for each primary), but the maximum number of colors a frame can use is 256. This limitation was reasonable when GIF was developed because hardware that could display more than 256 colors simultaneously was rare. Simple graphics, line drawings, cartoons, and grey-scale photographs typically need fewer than 256 colors.
Each frame can designate one index as a "transparent background color": any pixel assigned this index takes on the color of the pixel in the same position from the background, which may have been determined by a previous frame of animation.
Many techniques, collectively called dithering, have been developed to approximate a wider range of colors with a small color palette by using pixels of two or more colors to approximate in-between colors. These techniques sacrifice spatial resolution to approximate deeper color resolution. While not part of the GIF specification, dithering can be used in images subsequently encoded as GIF images. This is often not an ideal solution for GIF images, both because the loss of spatial resolution typically makes an image look fuzzy on the screen, and because the dithering patterns often interfere with the compressibility of the image data, working against GIF's main purpose.
In the early days of graphical web browsers, graphics cards with 8-bit buffers (allowing only 256 colors) were common and it was fairly common to make GIF images using the websafe palette. This ensured predictable display, but severely limited the choice of colors. When 24-bit color became the norm, palettes could instead be populated with the optimum colors for individual images.
A small color table may suffice for small images, and keeping the color table small allows the file to be downloaded faster. Both the 87a and 89a specifications allow color tables of 2^n colors for any n from 1 through 8. Most graphics applications will read and display GIF images with any of these table sizes; but some do not support all sizes when creating images. Tables of 2, 16, and 256 colors are widely supported.
True color
Although GIF is almost never used for true color images, it is possible to do so. A GIF image can include multiple image blocks, each of which can have its own 256-color palette, and the blocks can be tiled to create a complete image. Alternatively, the GIF89a specification introduced the idea of a "transparent" color where each image block can include its own palette of 255 visible colors plus one transparent color. A complete image can be created by layering image blocks with the visible portion of each layer showing through the transparent portions of the layers above.
To render a full-color image as a GIF, the original image must be broken down into smaller regions having no more than 255 or 256 different colors. Each of these regions is then stored as a separate image block with its own local palette and when the image blocks are displayed together (either by tiling or by layering partially transparent image blocks), the complete, full-color image appears. For example, breaking an image into tiles of 16 by 16 pixels (256 pixels in total) ensures that no tile has more than the local palette limit of 256 colors, although larger tiles may be used and similar colors merged resulting in some loss of color information.
Since each image block can have its own local color table, a GIF file having many image blocks can be very large, limiting the usefulness of full-color GIFs. Additionally, not all GIF rendering programs handle tiled or layered images correctly. Many rendering programs interpret tiles or layers as animation frames and display them in sequence as an animation with most web browsers automatically displaying the frames with a delay time of 0.1 seconds or more.
Example GIF file
The hex numbers in the following tables are in little-endian byte order, as the format specification prescribes.
Image coding
The image pixel data, scanned horizontally from top left, are converted by LZW encoding to codes that are then mapped into bytes for storing in the file. The pixel codes typically don't match the 8-bit size of the bytes, so the codes are packed into bytes by a "little-endian" scheme: the least significant bit of the first code is stored in the least significant bit of the first byte, higher order bits of the code into higher order bits of the byte, spilling over into the low order bits of the next byte as necessary. Each subsequent code is stored starting at the least significant bit not already used.
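The packing scheme just described can be sketched as follows (illustrative only; a real encoder also widens the codes as the dictionary grows):

```python
def pack_codes(codes, code_width):
    """Pack integer codes into bytes, least significant bit first."""
    out, acc, nbits = bytearray(), 0, 0
    for code in codes:
        acc |= code << nbits              # place code above queued bits
        nbits += code_width
        while nbits >= 8:                 # flush completed low-order bytes
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                             # final partial byte, zero-padded
        out.append(acc & 0xFF)
    return bytes(out)
```

For instance, `pack_codes([0x100, 0x028, 0x0FF], 9)` reproduces the spill-over behaviour described above for a run of 9-bit codes.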
This byte stream is stored in the file as a series of "sub-blocks". Each sub-block has a maximum length 255 bytes and is prefixed with a byte indicating the number of data bytes in the sub-block. The series of sub-blocks is terminated by an empty sub-block (a single 0 byte, indicating a sub-block with 0 data bytes).
For the sample image above the reversible mapping between 9-bit codes and bytes is shown below.
A slight compression is evident: pixel colors defined initially by 15 bytes are exactly represented by 12 code bytes including control codes.
The encoding process that produces the 9-bit codes is shown below. A local string accumulates pixel color numbers from the palette, with no output action as long as the local string can be found in a code table. There is special treatment of the first two pixels that arrive before the table grows from its initial size by additions of strings. After each output code, the local string is initialized to the latest pixel color (that could not be included in the output code).
Table 9-bit
string --> code code Action
#0 | 000h Initialize root table of 9-bit codes
palette | :
colors | :
#255 | 0FFh
clr | 100h
end | 101h
| 100h Clear
Pixel Local |
color Palette string |
BLACK #40 28 | 028h 1st pixel always to output
WHITE #255 FF | String found in table
28 FF | 102h Always add 1st string to table
FF | Initialize local string
WHITE #255 FF FF | String not found in table
| 0FFh - output code for previous string
FF FF | 103h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
BLACK #40 FF FF 28 | String not found in table
| 103h - output code for previous string
FF FF 28 | 104h - add latest string to table
28 | - initialize local string
WHITE #255 28 FF | String found in table
WHITE #255 28 FF FF | String not found in table
| 102h - output code for previous string
28 FF FF | 105h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String not found in table
| 103h - output code for previous string
FF FF FF | 106h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String found in table
WHITE #255 FF FF FF FF | String not found in table
| 106h - output code for previous string
FF FF FF FF| 107h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String found in table
WHITE #255 FF FF FF FF | String found in table
No more pixels
107h - output code for last string
101h End
For clarity the table is shown above as being built of strings of increasing length. That scheme can function but the table consumes an unpredictable amount of memory. Memory can be saved in practice by noting that each new string to be stored consists of a previously stored string augmented by one character. It is economical to store at each address only two words: an existing address and one character.
The LZW algorithm requires a search of the table for each pixel. A linear search through up to 4096 addresses would make the coding slow. In practice the codes can be stored in order of numerical value; this allows each search to be done by a SAR (successive approximation register, as used in some ADCs), with only 12 magnitude comparisons. For this efficiency an extra table is needed to convert between codes and actual memory addresses; upkeep of the extra table is needed only when a new code is stored, which happens at much less than the pixel rate.
Image decoding
Decoding begins by mapping the stored bytes back to 9-bit codes. These are decoded to recover the pixel colors as shown below. A table identical to the one used in the encoder is built by adding strings by this rule:
shift
9-bit ----> Local Table Pixel
code code code --> string Palette color Action
100h 000h | #0 Initialize root table of 9-bit codes
: | palette
: | colors
0FFh | #255
100h | clr
101h | end
028h | #40 Decode 1st pixel
0FFh 028h | Incoming code found in table
| #255 - output string from table
102h | 28 FF - add to table
103h 0FFh | Incoming code not found in table
103h | FF FF - add to table
| - output string from table
| #255
| #255
102h 103h | Incoming code found in table
| - output string from table
| #40
| #255
104h | FF FF 28 - add to table
103h 102h | Incoming code found in table
| - output string from table
| #255
| #255
105h | 28 FF FF - add to table
106h 103h | Incoming code not found in table
106h | FF FF FF - add to table
| - output string from table
| #255
| #255
| #255
107h 106h | Incoming code not found in table
107h | FF FF FF FF - add to table
| - output string from table
| #255
| #255
| #255
| #255
101h | End
LZW code lengths
Shorter code lengths can be used for palettes smaller than the 256 colors in the example. If the palette is only 64 colors (so color indexes are 6 bits wide), the symbols can range from 0 to 63, and the symbol width can be taken to be 6 bits, with codes starting at 7 bits. In fact, the symbol width need not match the palette size: as long as the values decoded are always less than the number of colors in the palette, the symbols can be any width from 2 to 8, and the palette size any power of 2 from 2 to 256. For example, if only the first four colors (values 0 to 3) of the palette are used, the symbols can be taken to be 2 bits wide with codes starting at 3 bits.
Conversely, the symbol width could be set at 8, even if only values 0 and 1 are used; these data would only require a two-color table. Although there would be no point in encoding the file that way, something similar typically happens for bi-color images: the minimum symbol width is 2, even if only values 0 and 1 are used.
The code table initially contains codes that are one bit longer than the symbol size in order to accommodate the two special codes clr and end and codes for strings that are added during the process. When the table is full the code length increases to give space for more strings, up to a maximum code 4095 (0xFFF). As the decoder builds its table it tracks these increases in code length, and it is able to unpack incoming bytes accordingly.
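A decoder-side sketch of this bookkeeping (assuming the growth rule described above and the 12-bit ceiling implied by the maximum code 4095):

```python
def initial_code_state(symbol_width):
    """Code assignments at startup, or after a CLEAR code."""
    clear = 1 << symbol_width            # the clr code
    end = clear + 1                      # the end code
    next_slot = end + 1                  # first free dictionary slot
    code_width = symbol_width + 1        # one bit longer than the symbols
    return clear, end, next_slot, code_width

def after_new_string(next_slot, code_width):
    """Track a stored string; widen codes when the width is exhausted."""
    next_slot += 1
    if next_slot == (1 << code_width) and code_width < 12:
        code_width += 1                  # grows up to the maximum code 4095
    return next_slot, code_width
```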
Uncompressed GIF
The GIF encoding process can be modified to create a file without LZW compression that is still viewable as a GIF image. This technique was introduced originally as a way to avoid patent infringement. Uncompressed GIF can also be a useful intermediate format for a graphics programmer because individual pixels are accessible for reading or painting. An uncompressed GIF file can be converted to an ordinary GIF file simply by passing it through an image editor.
The modified encoding method ignores building the LZW table and emits only the root palette codes and the codes for CLEAR and STOP. This yields a simpler encoding (a 1-to-1 correspondence between code values and palette codes) but sacrifices all of the compression: each pixel in the image generates an output code indicating its color index. When processing an uncompressed GIF, a standard GIF decoder will not be prevented from writing strings to its dictionary table, but the code width must never increase since that triggers a different packing of bits to bytes.
If the symbol width is n, the codes of width n + 1 fall naturally into two blocks: the lower block of 2^n codes for coding single symbols, and the upper block of 2^n codes that will be used by the decoder for sequences of length greater than one. Of that upper block, the first two codes are already taken: 2^n for CLEAR and 2^n + 1 for STOP. The decoder must also be prevented from using the last code in the upper block, 2^(n+1) - 1, because when the decoder fills that slot, it will increase the code width. Thus in the upper block there are 2^n - 3 codes available to the decoder that won't trigger an increase in code width. Because the decoder is always one step behind in maintaining the table, it does not generate a table entry upon receiving the first code from the encoder, but will generate one for each succeeding code. Thus the encoder can generate 2^n - 2 codes without triggering an increase in code width. Therefore, the encoder must emit extra CLEAR codes at intervals of 2^n - 2 codes or less to make the decoder reset the coding dictionary. The GIF standard allows such extra CLEAR codes to be inserted in the image data at any time. The composite data stream is partitioned into sub-blocks that each carry from 1 to 255 bytes.
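Under these constraints, the code stream of an uncompressed GIF can be generated in a few lines (a sketch under the assumptions above; packing into bytes and sub-blocks is as described earlier):

```python
def uncompressed_codes(pixels, symbol_width):
    """Emit only root codes, inserting CLEAR often enough that the
    decoder's dictionary never forces a code-width increase."""
    clear = 1 << symbol_width
    stop = clear + 1
    max_run = (1 << symbol_width) - 2     # codes allowed between CLEARs
    codes = [clear]
    for i, index in enumerate(pixels):
        if i and i % max_run == 0:
            codes.append(clear)           # reset the decoder's dictionary
        codes.append(index)               # palette index emitted verbatim
    codes.append(stop)
    return codes
```

Applied to the fifteen pixels of the sample image with symbol width 8, this yields exactly the clear/pixel/stop sequence listed below.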
For the sample 3×5 image above, the following 9-bit codes represent "clear" (100) followed by image pixels in scan order and "stop" (101).
100 028 0FF 0FF 0FF 028 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 101
After the above codes are mapped to bytes, the uncompressed file differs from the compressed file thus:
Compression example
The trivial example of a large image of solid color demonstrates the variable-length LZW compression used in GIF files.
The code values shown are packed into bytes which are then packed into blocks of up to 255 bytes. A block of image data begins with a byte that declares the number of bytes to follow. The last block of data for an image is marked by a zero block-length byte.
Interlacing
The GIF Specification allows each image within the logical screen of a GIF file to specify that it is interlaced; i.e., that the order of the raster lines in its data block is not sequential. This allows a partial display of the image that can be recognized before the full image is painted.
An interlaced image is divided from top to bottom into strips 8 pixels high, and the rows of the image are presented in the following order:
Pass 1: Line 0 (the top-most line) from each strip.
Pass 2: Line 4 from each strip.
Pass 3: Lines 2 and 6 from each strip.
Pass 4: Lines 1, 3, 5, and 7 from each strip.
The pixels within each line are not interlaced, but presented consecutively from left to right. As with non-interlaced images, there is no break between the data for one line and the data for the next. The indicator that an image is interlaced is a bit set in the corresponding Image Descriptor block.
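The four passes can be generated directly from the row indices; a small illustrative helper:

```python
def interlaced_row_order(height):
    """Return image row indices in GIF interlace order (four passes)."""
    order = []
    for start, step in ((0, 8), (4, 8), (2, 4), (1, 2)):
        order.extend(range(start, height, step))
    return order

print(interlaced_row_order(10))  # [0, 8, 4, 2, 6, 1, 3, 5, 7, 9]
```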
Animated GIF
Although GIF was not designed as an animation medium, its ability to store multiple images in one file naturally suggested using the format to store the frames of an animation sequence. To facilitate displaying animations, the GIF89a spec added the Graphic Control Extension (GCE), which allows the images (frames) in the file to be painted with time delays, forming a video clip. Each frame in an animation GIF is introduced by its own GCE specifying the time delay to wait after the frame is drawn. Global information at the start of the file applies by default to all frames. The data is stream-oriented, so the file offset of the start of each GCE depends on the length of preceding data. Within each frame the LZW-coded image data is arranged in sub-blocks of up to 255 bytes; the size of each sub-block is declared by the byte that precedes it.
By default, an animation displays the sequence of frames only once, stopping when the last frame is displayed. To enable an animation to loop, Netscape in the 1990s used the Application Extension block (intended to allow vendors to add application-specific information to the GIF file) to implement the Netscape Application Block (NAB). This block, placed immediately before the sequence of animation frames, specifies the number of times the sequence of frames should be played (1 to 65535 times) or that it should repeat continuously (zero indicates loop forever). Support for these repeating animations first appeared in Netscape Navigator version 2.0, and then spread to other browsers. Most browsers now recognize and support NAB, though it is not strictly part of the GIF89a specification.
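The byte layout of the NAB is commonly documented as follows; the exact constants are an assumption drawn from that common documentation rather than from the text above:

```python
import struct

def netscape_loop_block(loop_count=0):
    """Build a Netscape Application Block; 0 means loop forever."""
    return (b"\x21\xFF"                  # extension introducer + app label
            b"\x0B" b"NETSCAPE2.0"       # 11-byte application identifier
            b"\x03\x01"                  # sub-block: length 3, sub-id 1
            + struct.pack("<H", loop_count)  # u16 little-endian repeat count
            + b"\x00")                   # empty sub-block ends the chain
```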
The following example shows the structure of the animation file Rotating earth (large).gif shown (as a thumbnail) in the article's infobox.
The animation delay for each frame is specified in the GCE in hundredths of a second. Some economy of data is possible where a frame need only rewrite a portion of the pixels of the display, because the Image Descriptor can define a smaller rectangle to be rescanned instead of the whole image. Browsers or other displays that do not support animated GIFs typically show only the first frame.
The size and color quality of animated GIF files can vary significantly depending on the application used to create them. Strategies for minimizing file size include using a common global color table for all frames (rather than a complete local color table for each frame) and minimizing the number of pixels covered in successive frames (so that only the pixels that change from one frame to the next are included in the latter frame). More advanced techniques involve modifying color sequences to better match the existing LZW dictionary, a form of lossy compression. Simply packing a series of independent frame images into a composite animation tends to yield large file sizes. Tools are available to minimize the file size given an existing GIF.
Metadata
Metadata can be stored in GIF files as a comment block, a plain text block, or an application-specific application extension block. Several graphics editors use unofficial application extension blocks to include the data used to generate the image, so that it can be recovered for further editing.
All of these methods technically require the metadata to be broken into sub-blocks so that applications can navigate the metadata block without knowing its internal structure.
The Extensible Metadata Platform (XMP) metadata standard introduced an unofficial but now widespread "XMP Data" application extension block for including XMP data in GIF files. Since the XMP data is encoded using UTF-8 without NUL characters, there are no 0 bytes in the data. Rather than break the data into formal sub-blocks, the extension block terminates with a "magic trailer" that routes any application treating the data as sub-blocks to a final 0 byte that terminates the sub-block chain.
Unisys and LZW patent enforcement
In 1977 and 1978, Jacob Ziv and Abraham Lempel published a pair of papers on a new class of lossless data-compression algorithms, now collectively referred to as LZ77 and LZ78. In 1983, Terry Welch developed a fast variant of LZ78 which was named Lempel–Ziv–Welch (LZW).
Welch filed a patent application for the LZW method in June 1983. The resulting patent, US4558302, granted in December 1985, was assigned to Sperry Corporation who subsequently merged with Burroughs Corporation in 1986 and formed Unisys. Further patents were obtained in the United Kingdom, France, Germany, Italy, Japan and Canada.
In addition to the above patents, Welch's 1983 patent also includes citations to several other patents that influenced it, including:
two 1980 Japanese patents from NEC's Jun Kanatsu,
(1974) from John S. Hoerning,
(1977) from Klaus E. Holtz, and
a 1981 German patent from Karl Eckhart Heinz.
In June 1984, an article by Welch was published in the IEEE magazine which publicly described the LZW technique for the first time. LZW became a popular data compression technique and, when the patent was granted, Unisys entered into licensing agreements with over a hundred companies.
The popularity of LZW led CompuServe to choose it as the compression technique for their version of GIF, developed in 1987. At the time, CompuServe was not aware of the patent. Unisys became aware that the version of GIF used the LZW compression technique and entered into licensing negotiations with CompuServe in January 1993. The subsequent agreement was announced on 24 December 1994. Unisys stated that they expected all major commercial on-line information services companies employing the LZW patent to license the technology from Unisys at a reasonable rate, but that they would not require licensing, or fees to be paid, for non-commercial, non-profit GIF-based applications, including those for use on the on-line services.
Following this announcement, there was widespread condemnation of CompuServe and Unisys, and many software developers threatened to stop using GIF. The PNG format (see below) was developed in 1995 as an intended replacement. However, obtaining support from the makers of Web browsers and other software for the PNG format proved difficult and it was not possible to replace GIF, although PNG has gradually increased in popularity. Therefore, GIF variations without LZW compression were developed. For instance the libungif library, based on Eric S. Raymond's giflib, allows creation of GIFs that followed the data format but avoided the compression features, thus avoiding use of the Unisys LZW patent. A 2001 Dr. Dobb's article described a way to achieve LZW-compatible encoding without infringing on its patents.
In August 1999, Unisys changed the details of their licensing practice, announcing the option for owners of certain non-commercial and private websites to obtain licenses on payment of a one-time license fee of $5000 or $7500. Such licenses were not required for website owners or other GIF users who had used licensed software to generate GIFs. Nevertheless, Unisys was subjected to thousands of online attacks and abusive emails from users believing that they were going to be charged $5000 or sued for using GIFs on their websites. Despite giving free licenses to hundreds of non-profit organizations, schools and governments, Unisys was completely unable to generate any good publicity and continued to be condemned by individuals and organizations such as the League for Programming Freedom who started the "Burn All GIFs" campaign in 1999.
The United States LZW patent expired on 20 June 2003. The counterpart patents in the United Kingdom, France, Germany and Italy expired on 18 June 2004, the Japanese patents expired on 20 June 2004, and the Canadian patent expired on 7 July 2004. Consequently, while Unisys has further patents and patent applications relating to improvements to the LZW technique, LZW itself (and consequently GIF) have been free to use since July 2004.
Alternatives
PNG
Portable Network Graphics (PNG) was designed as a replacement for GIF in order to avoid infringement of Unisys' patent on the LZW compression technique. PNG offers better compression and more features than GIF, animation being the only significant exception. PNG is more suitable than GIF in instances where true-color imaging and alpha transparency are required.
Although support for PNG format came slowly, new web browsers support PNG. Older versions of Internet Explorer do not support all features of PNG. Versions 6 and earlier do not support alpha channel transparency without using Microsoft-specific HTML extensions. Gamma correction of PNG images was not supported before version 8, and the display of these images in earlier versions may have the wrong tint.
For identical 8-bit (or lower) image data, PNG files are typically smaller than the equivalent GIFs, due to the more efficient compression techniques used in PNG encoding. Complete support for GIF is complicated chiefly by the complex canvas structure it allows, though this is what enables the compact animation features.
Animation formats
Videos resolve many issues that GIFs present through common usage on the web. They include drastically smaller file sizes, the ability to surpass the 8-bit color restriction, and better frame-handling and compression through inter-frame coding. Virtually universal support for the GIF format in web browsers and a lack of official support for video in the HTML standard caused GIF to rise to prominence for the purpose of displaying short video-like files on the web.
MNG ("Multiple-image Network Graphics") was originally developed as a PNG-based solution for animations. MNG reached version 1.0 in 2001, but few applications support it.
APNG ("Animated Portable Network Graphics") was proposed by Mozilla in 2006. APNG is an extension to the PNG format as alternative to the MNG format. APNG is supported by most browsers as of 2019. APNG provides the ability to animate PNG files, while retaining backwards compatibility in decoders that cannot understand the animation chunk (unlike MNG). Older decoders will simply render the first frame of the animation.
The PNG group officially rejected APNG as an official extension on 20 April 2007.
There have been several subsequent proposals for a simple animated graphics format based on PNG using several different approaches. Nevertheless, APNG remained under development by Mozilla and was supported in Firefox 3.0, while MNG support was dropped. APNG is currently supported by all major web browsers, including Chrome (since version 59.0), Opera, Firefox and Edge.
Embedded Adobe Flash objects and MPEG files were used on some websites to display simple video, but required the use of an additional browser plugin.
WebM and WebP are in development and are supported by some web browsers.
Other options for web animation include serving individual frames using AJAX, or animating SVG ("Scalable vector graphics") images using JavaScript or SMIL ("Synchronized Multimedia Integration Language").
With the introduction of widespread support of the HTML video (<video>) tag in most web browsers, some websites use a looped version of the video tag generated by JavaScript functions. This gives the appearance of a GIF, but with the size and speed advantages of compressed video.
Notable examples are Gfycat and Imgur and their GIFV metaformat, which is really a video tag playing a looped MP4 or WebM compressed video.
HEIF ("High Efficiency Image File Format") is an image file format, finalized in 2015, which uses a discrete cosine transform (DCT) lossy compression algorithm based on the HEVC video format, and related to the JPEG image format. In contrast to JPEG, HEIF supports animation.
Compared to the GIF format, which lacks DCT compression, HEIF allows significantly more efficient compression. HEIF stores more information and produces higher-quality animated images at a small fraction of an equivalent GIF's size.
VP9 only supports alpha compositing with 4:2:0 chroma subsampling, which may be unsuitable for GIFs that combine transparency with rasterised vector graphics with fine color details.
AV1 video codec or AVIF can also be used either as a video or a sequenced image.
Uses
In April 2014, 4chan added support for silent WebM videos that are under 3 MB in size and 2 min in length, and in October 2014, Imgur started converting any GIF files uploaded to the site to H.264 video and giving the link to the HTML player the appearance of an actual file with a .gifv extension.
In January 2016, Telegram started re-encoding all GIFs to MPEG-4 videos that "require up to 95% less disk space for the same image quality."
| Technology | File formats | null |
12713 | https://en.wikipedia.org/wiki/Giant%20panda | Giant panda | The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China. It is characterised by its white coat with black patches around the eyes, ears, legs and shoulders. Its body is rotund; adult individuals weigh and are typically long. It is sexually dimorphic, with males being typically 10 to 20% larger than females. A thumb is visible on its forepaw, which helps in holding bamboo in place for feeding. It has large molar teeth and expanded temporal fossa to meet its dietary requirements. It can digest starch and is mostly herbivorous with a diet consisting almost entirely of bamboo and bamboo shoots.
The giant panda lives exclusively in six montane regions in a few Chinese provinces at elevations of up to . It is solitary and gathers only in mating seasons. It relies on olfactory communication, placing scent marks as chemical cues on landmarks such as rocks or trees. Females rear cubs for an average of 18 to 24 months. The oldest known giant panda was 38 years old.
As a result of farming, deforestation and infrastructural development, the giant panda has been driven out of the lowland areas where it once lived. The wild population has increased again to 1,864 individuals as of March 2015. Since 2016, it has been listed as Vulnerable on the IUCN Red List. In July 2021, Chinese authorities also classified the giant panda as vulnerable. It is a conservation-reliant species. By 2007, the captive population comprised 239 giant pandas in China and another 27 outside the country. It has often served as China's national symbol, appeared on Chinese Gold Panda coins since 1982 and as one of the five Fuwa mascots of the 2008 Summer Olympics held in Beijing.
Etymology
The word panda was borrowed into English from French, but no conclusive explanation of the origin of the French word panda has been found. The closest candidate is the Nepali word ponya, possibly referring to the adapted wrist bone of the red panda, which is native to Nepal. In many older sources, the name "panda" or "common panda" refers to the red panda (Ailurus fulgens), which was described some 40 years earlier and over that period was the only animal known as a panda. The binomial name Ailuropoda melanoleuca means black and white (melanoleuca) cat-foot (ailuropoda).
Since the earliest collection of Chinese writings, the Chinese language has given the bear many different names, including mò (, ancient Chinese name for giant panda), huāxióng (; "spotted bear") and zhúxióng (; "bamboo bear"). The most popular names in China today are dàxióngmāo (; ), or simply xióngmāo (; ). As with the word panda in English, xióngmāo () was originally used to describe just the red panda, but dàxióngmāo () and xiǎoxióngmāo (; ) were coined to differentiate between the species.
In Taiwan, another popular name for panda is the inverted dàmāoxióng (; ), though many encyclopedias and dictionaries in Taiwan still use the "bear cat" form as the correct name. Some linguists argue that in this construction "bear" rather than "cat" is the base noun, making the name more grammatically and logically correct, which has led to the popular choice despite official writings. This name did not gain its popularity until 1988, when a private zoo in Tainan painted a sun bear black and white and created the Tainan fake panda incident.
Taxonomy
For many decades, the precise taxonomic classification of the giant panda was under debate because it shares characteristics with both bears and raccoons. In 1985, molecular studies indicated that the giant panda is a true bear, part of the family Ursidae. These studies show it diverged about 19 million years ago from the common ancestor of the Ursidae; it is the most basal member of this family and equidistant from all other extant bear species.
Subspecies
Two subspecies of giant panda have been recognized on the basis of distinct cranial measurements, colour patterns, and population genetics.
The nominate subspecies, A. m. melanoleuca, consists of most extant populations of the giant panda. These animals are principally found in Sichuan and display the typical stark black and white contrasting colours.
The Qinling panda, A. m. qinlingensis, is restricted to the Qinling Mountains in Shaanxi at elevations of . The typical black and white pattern of Sichuan giant pandas is replaced with a light brown and white pattern. The skull of A. m. qinlingensis is smaller than its relatives, and it has larger molars.
A detailed study of the giant panda's genetic history from 2012 confirms that the separation of the Qinling population occurred about 300,000 years ago, and reveals that the non-Qinling population further diverged into two groups, named the Minshan and the Qionglai-Daxiangling-Xiaoxiangling-Liangshan group respectively, about 2,800 years ago.
Phylogeny
Of the eight extant species in the bear family Ursidae, the giant panda's lineage branched off the earliest.
Distribution and habitat
The giant panda is endemic to China. It is found in small, fragmented populations in six mountainous regions in the country, mainly in Sichuan, and also in neighbouring Shaanxi and Gansu. Successful habitat preservation has seen a rise in panda numbers, though loss of habitat due to human activities remains its biggest threat. In areas with a high concentration of medium-to-large-sized mammals (such as domestic cattle, a species known to degrade the landscape), the giant panda population is generally low. This is mainly attributed to the panda's avoidance of interspecific competition.
The species has been located at elevations of above sea level. They frequent habitats with a healthy concentration of bamboos, typically old-growth forests, but may also venture into secondary forest habitats. The Daxiangling Mountain population inhabits both coniferous and broadleaf forests. Additionally, the Qinling population often selects evergreen broadleaf and conifer forests, while pandas in the Qionglai mountainous region exclusively select upland conifer forests. The remaining two populations, namely those occurring in the Liangshan and Xiaoxiangling mountains, predominantly occur in broadleaf evergreen and conifer forests.
Giant pandas once roamed across Southeast Asia from Myanmar to northern Vietnam. Their range in China spanned much of the southeast region. By the Pleistocene, climate change affected panda populations, and the subsequent domination of modern humans led to large-scale habitat loss. In 2001, it was estimated that the giant panda's range had declined by about 99% relative to its extent in earlier millennia.
Description
The giant panda has a body shape typical of bears. It has black fur on its ears, limbs, shoulders and around the eyes. The rest of the animal's coat is white. The bear's distinctive coloration appears to serve as camouflage in both winter and summer environments, since the species does not hibernate. The white areas provide camouflage in snow, while the black shoulders and legs conceal the animal in shade. Studies in the wild have found that, viewed from a distance, the panda displays disruptive coloration, while up close it relies more on blending in. The black ears may be used to display aggression, while the eye patches might help individuals identify one another. The giant panda's thick, woolly coat keeps it warm in the cool forests of its habitat.
The panda's skull shape is typical of durophagous carnivorans. It has evolved from earlier ancestors to exhibit larger molars with increased complexity and an expanded temporal fossa. One study measured a giant panda's bite force at 1,298.9 newtons (bite force quotient, BFQ, 151.4) at the canine teeth and 1,815.9 newtons (BFQ 141.8) at the carnassial teeth.
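The BFQ normalises a measured bite force against the force predicted for an animal of that body mass by a log-log regression across carnivores, so values above 100 indicate a stronger-than-expected bite. The following is a minimal sketch of that calculation only; the regression coefficients and the 110 kg body mass are illustrative placeholders, not the published values.

```python
import math

def bite_force_quotient(bite_force_n: float, body_mass_kg: float,
                        intercept: float = 1.95, slope: float = 0.59) -> float:
    """BFQ = 100 * observed / predicted, where the predicted bite force
    comes from a log-log regression of bite force on body mass.

    The default intercept and slope are illustrative placeholders,
    not the coefficients of the published carnivore regression."""
    predicted_n = 10 ** (intercept + slope * math.log10(body_mass_kg))
    return 100.0 * bite_force_n / predicted_n

# Canine bite force from the study, with an assumed 110 kg adult panda.
print(round(bite_force_quotient(1298.9, 110.0), 1))
```

Because both the observed and predicted forces scale with body size, the BFQ lets the panda's bite be compared meaningfully with those of much larger or smaller carnivorans.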
Adults measure around long, including a tail of about , and tall at the shoulder. Males can weigh up to . Females are generally 10–20% smaller than males. They weigh between and . The average weight for adults is .
The giant panda's paw has a digit similar to a thumb and five fingers; the thumb-like digit – actually a modified sesamoid bone – helps it to hold bamboo while eating. The giant panda's tail, measuring , is the second-longest in the bear family, behind the sloth bear.
Ecology
Diet
Despite its taxonomic classification as a carnivoran, the giant panda's diet is primarily herbivorous, with approximately 99% of its diet consisting of bamboo. However, the giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes, and thus derives little energy and little protein from the consumption of bamboo. Its ability to break down cellulose and lignin is very weak, and its main source of nutrients is starch and hemicelluloses. The most important part of the bamboo diet is the shoots, which are rich in starch and have up to 32% protein content. Accordingly, pandas have evolved a greater capability to digest starches than strict carnivores. Raw bamboo is toxic, containing cyanide compounds. Pandas' body tissues are less able than those of herbivores to detoxify cyanide, but their gut microbiomes are significantly enriched in putative genes coding for enzymes related to cyanide degradation, suggesting that they harbour cyanide-digesting gut microbes. It has been estimated that an adult panda absorbs of cyanide a day through its diet. To prevent poisoning, pandas have evolved anti-toxic mechanisms to protect themselves: about 80% of the cyanide is metabolized to less toxic thiocyanate and discharged in urine, while the remaining 20% is detoxified by other minor pathways.
During the shoot season (April–August), pandas store a large amount of food in preparation for the following months, when they live off a diet of bamboo leaves. The giant panda is a highly specialised animal with unique adaptations, and has lived in bamboo forests for millions of years.
The average giant panda eats as much as of bamboo shoots a day to compensate for the limited energy content of its diet. Ingestion of such a large quantity of material is possible and necessary because of the rapid passage of large amounts of indigestible plant material through the short, straight digestive tract. It is also noted, however, that such rapid passage of digesta limits the potential of microbial digestion in the gastrointestinal tract, limiting alternative forms of digestion. Given this voluminous diet, the giant panda defecates up to 40 times a day. The limited energy input imposed on it by its diet has affected the panda's behavior. The giant panda tends to limit its social interactions and avoids steeply sloping terrain to limit its energy expenditures.
Two of the panda's most distinctive features, its large size and round face, are adaptations to its bamboo diet. Anthropologist Russell Ciochon observed: "[much] like the vegetarian gorilla, the low body surface area to body volume [of the giant panda] is indicative of a lower metabolic rate. This lower metabolic rate and a more sedentary lifestyle allows the giant panda to subsist on nutrient poor resources such as bamboo." The giant panda's round face is the result of powerful jaw muscles, which attach from the top of the head to the jaw. Large molars crush and grind fibrous plant material.
The morphological characteristics of extinct relatives of the giant panda suggest that while the ancient giant panda was omnivorous 7 million years ago (mya), it only became herbivorous some 2–2.4 mya with the emergence of A. microta. Genome sequencing of the giant panda suggests that the dietary switch could have been initiated by the loss of the sole umami taste receptor, encoded by the genes TAS1R1 and TAS1R3 (also known as T1R1 and T1R3), resulting from two frameshift mutations within the T1R1 exons. Umami taste corresponds to high levels of glutamate, as found in meat, and its loss may thus have altered the food choice of the giant panda. Although the pseudogenisation (conversion into a pseudogene) of the umami taste receptor in Ailuropoda coincides with the dietary switch to herbivory, it is likely a result of, and not the reason for, the dietary change. The mutation time for the T1R1 gene in the giant panda is estimated at 4.2 mya, while fossil evidence indicates bamboo consumption in the giant panda species at least 7 mya, signifying that although complete herbivory occurred around 2 mya, the dietary switch was initiated prior to T1R1 loss-of-function.
Pandas eat any of 25 bamboo species in the wild, with the most common including Fargesia dracocephala and Fargesia rufa. Only a few bamboo species are widespread at the high altitudes pandas now inhabit. Bamboo leaves contain the highest protein levels; stems have less. Because of the synchronous flowering, death, and regeneration of all bamboo within a species, the giant panda must have at least two different species available in its range to avoid starvation. While primarily herbivorous, the giant panda still retains decidedly ursine teeth and will eat meat, fish, and eggs when available. In captivity, zoos typically maintain the giant panda's bamboo diet, though some will provide specially formulated biscuits or other dietary supplements.
Pandas will travel between different habitats when necessary to obtain the nutrients they need and to balance their diet for reproduction.
Interspecific interactions
Although adult giant pandas have few natural predators other than humans, young cubs are vulnerable to attacks by snow leopards, yellow-throated martens, eagles, feral dogs, and the Asian black bear. Sub-adults weighing up to may be vulnerable to predation by leopards.
Giant pandas are sympatric with other large mammals and bamboo feeders, such as the takin (Budorcas taxicolor). The takin and giant panda occupy a similar ecological niche and consume the same resources. When competition for food is fierce, pandas disperse to the outskirts of the takin's distribution. Other possible competitors include, but are not limited to, the Eurasian wild pig (Sus scrofa), Chinese goral (Naemorhedus griseus) and the Asian black bear (Ursus thibetanus). Giant pandas avoid areas with a mid-to-high density of livestock, as livestock depress the vegetation. The Tibetan Plateau is the only known area where both giant and red pandas can be found. Although the two occupy near-identical ecological niches, competition between them has rarely been observed. Nearly 50% of their respective distributions overlap, and successful coexistence is achieved through distinct habitat selection.
Pathogens and parasites
A captive female died from toxoplasmosis, a disease caused by Toxoplasma gondii, an obligate intracellular parasitic protozoan that infects most warm-blooded animals, including humans. Pandas are likely susceptible to Baylisascaris schroederi, a parasitic nematode known to infect giant panda intestines. This nematode causes baylisascariasis, a deadly disease that kills more wild pandas than any other cause. Additionally, the population is threatened by canine distemper virus (CDV), canine parvovirus, rotavirus, canine adenovirus, and canine coronavirus. Bacteria such as Clostridium welchii, Proteus mirabilis, Klebsiella pneumoniae, and Escherichia coli may also be lethal.
Behavior
The giant panda is a terrestrial animal and primarily spends its life roaming and feeding in the bamboo forests of the Qinling Mountains and in the hilly province of Sichuan. Giant pandas are generally solitary. Each adult has a defined territory, and a female is not tolerant of other females in her range. Social encounters occur primarily during the brief breeding season, when pandas in proximity to one another will gather. After mating, the male leaves the female alone to raise the cub. Pandas were once thought to be crepuscular, active twice a day at dawn and dusk; however, they may belong to a category all of their own, with activity peaks in the morning, afternoon and around midnight. The low nutritional quality of bamboo means pandas need to eat more frequently, and because they lack major predators they can be active at any time of the day. Activity is highest in June and decreases in late summer to autumn, with an increase from November through the following March. Activity is also directly related to the amount of sunlight on colder days, when solar radiation has a stronger positive effect on activity levels.
Pandas communicate through vocalisation and scent marking, such as clawing trees or spraying urine. They are able to climb and take shelter in hollow trees or rock crevices, but do not establish permanent dens. For this reason, pandas do not hibernate, similar to other subtropical mammals; instead they move to elevations with warmer temperatures. Pandas rely primarily on spatial memory rather than visual memory. Though the panda is often assumed to be docile, it has been known to attack humans on rare occasions. Pandas have been known to cover themselves in horse manure to protect themselves against cold temperatures.
The species communicates foremost through a blatting sound, the emission of which promotes peaceful interactions. When in oestrus, a female emits a chirp. In hostile confrontations or during fights, the giant panda emits vocalizations such as roars or growls, whereas squeals typically indicate inferiority and submission in a dispute. Other vocalizations include honks and moans.
Olfactory communication
Giant pandas rely heavily on olfactory signals to communicate with one another. Scent marks are used to spread these chemical cues and are placed on landmarks such as rocks or trees. Chemical communication plays many roles in giant panda social life: scent marks and odors convey information about sexual status (whether a female is in estrus or not), age, gender, individuality, dominance over territory, and choice of settlement. Giant pandas deposit these volatile compounds through the anogenital gland and adopt distinctive positions when scent marking. Males deposit scent marks or urine by lifting a hind leg, rubbing their backside, or standing in order to rub the anogenital gland onto a landmark. Females, by contrast, squat or simply rub their genitals onto a landmark.
The season plays a major role in mediating chemical communication: whether or not it is breeding season influences which odors are prioritized, and chemical signals can have different functions in different seasons. During the non-breeding season, females prefer the odors of other females, because reproduction is not their primary motivation; during breeding season, odors from the opposite sex become more attractive. Because pandas are solitary mammals and their breeding season is so brief, females secrete chemical cues to advertise their sexual status to males. These cues can be considered pheromones for sexual reproduction. Females deposit scent marks through their urine, which induces an increase in androgen levels in males. Androgen is a sex hormone found in both males and females; testosterone is the major androgen produced by males. Civetone and decanoic acid are chemicals found in female urine that promote behavioral responses in males; both are considered giant panda pheromones. Male pandas also secrete chemical signals that encode their sexual reproductivity and age, which benefits a female when choosing a mate: age, for example, can indicate sexual maturity and sperm quality. Pandas are also able to determine when a signal was placed, further aiding the search for a potential mate. Chemical cues are not used only for communication between males and females; pandas can determine individuality from chemical signals, allowing them to differentiate between a potential partner and an individual of the same sex, a potential competitor.
Chemical cues, or odors, play an important role in how a panda chooses its habitat. Pandas look for odors that tell them not only the identity of another panda but also whether it should be avoided. Pandas tend to avoid other members of their species for most of the year, breeding season being the brief period of major interaction, and chemical signaling enables both avoidance and competition. Pandas whose habitats lie in similar locations will collectively leave scent marks in particular places, termed "scent stations". When a panda comes across a scent station, it can identify a specific panda and the scope of its habitat, allowing it to pursue a potential mate or avoid a potential competitor.
Pandas can assess an individual's dominance status, including their age and size, via odor cues and may choose to avoid a scent mark if the signaler's competitive ability outweighs their own. A panda's size can be conveyed through the height of the scent mark. Since larger animals can place higher scent marks, an elevated scent mark advertises a higher competitive ability. Age must also be taken into consideration when assessing a competitor's fighting ability. For example, a mature panda will be larger than a younger, immature panda and possess an advantage during a fight.
Reproduction
Giant pandas reach sexual maturity between the ages of four and eight, and may be reproductive until age 20. The mating season is between March and May, when a female goes into estrus, which lasts for two or three days and occurs only once a year. When mating, the female is in a crouching, head-down position as the male mounts her from behind. Copulation time ranges from 30 seconds to five minutes, but the male may mount her repeatedly to ensure successful fertilisation. The gestation period is somewhere between 95 and 160 days; the variability arises because the fertilized egg may linger in the reproductive system for a while before implanting on the uterine wall. Giant pandas give birth to twins in about half of pregnancies. If twins are born, usually only one survives in the wild: the mother selects the stronger of the cubs, and the weaker cub dies of starvation. The mother is thought to be unable to produce enough milk for two cubs, since she does not store fat. The father has no part in helping raise the cub.
When the cub is first born, it is pink, blind, and toothless, weighing only , or about of the mother's weight, proportionally the smallest baby of any placental mammal. It nurses from its mother's breast six to 14 times a day for up to 30 minutes at a time. The mother may leave the den for three to four hours to feed, which leaves the cub defenseless. One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. A slight pink colour may appear on the cub's fur as a result of a chemical reaction between the fur and its mother's saliva. A month after birth, the colour pattern of the cub's fur is fully developed. Its fur is very soft and coarsens with age. The cub begins to crawl at 75 to 80 days; mothers play with their cubs by rolling and wrestling with them. The cubs can eat small quantities of bamboo after six months, though mother's milk remains the primary food source for most of the first year. Giant panda cubs weigh at one year and live with their mothers until they are 18 months to two years old. The interval between births in the wild is generally two years.
Initially, the primary method of breeding giant pandas in captivity was artificial insemination, as they seemed to lose interest in mating once captured. This led some scientists to try methods such as showing them videos of giant pandas mating and giving the males sildenafil (commonly known as Viagra). In the 2000s, researchers started having success with captive breeding programs, and they have since determined that giant pandas have breeding rates comparable to those of some populations of the American black bear, a thriving bear species.
In July 2009, Chinese scientists confirmed the birth of the first cub to be successfully conceived through artificial insemination using frozen sperm. The technique for freezing the sperm in liquid nitrogen was first developed in 1980, and the first such birth was hailed as a solution to the dwindling availability of giant panda semen, which had led to inbreeding. Panda semen, which can be frozen for decades, could be shared between different zoos to save the species. As of 2009, it was expected that zoos in destinations such as San Diego in the United States and Mexico City would be able to provide their own semen to inseminate more giant pandas.
Attempts have also been made to reproduce giant pandas by interspecific pregnancy where cloned panda embryos were implanted into the uterus of an animal of another species. This has resulted in panda fetuses, but no live births.
Human interaction
Early references
In Ancient China, people thought pandas to be rare and noble creatures – the Empress Dowager Bo was buried with a panda skull in her vault. The grandson of Emperor Taizong of Tang is said to have given Japan two pandas and a sheet of panda skin as a sign of goodwill. Unlike many other animals in Ancient China, pandas were rarely thought to have medical uses. The few known uses include the Sichuan tribal peoples' use of panda urine to melt accidentally swallowed needles, and the use of panda pelts to control menstruation as described in the Qin dynasty encyclopedia Erya.
The creature named mo (貘) mentioned in some ancient books has been interpreted as giant panda. The dictionary Shuowen Jiezi (Eastern Han Dynasty) says that the mo, from Shu (Sichuan), is bear-like, but yellow-and-black, although the older Erya describes mo simply as a "white leopard". The interpretation of the legendary fierce creature pixiu (貔貅) as referring to the giant panda is also common.
During the reign of the Yongle Emperor (early 15th century), his relative from Kaifeng sent him a captured zouyu (騶虞), and another zouyu was sighted in Shandong. Zouyu is a legendary "righteous" animal, which, similarly to a qilin, only appears during the rule of a benevolent and sincere monarch.
In captivity
Pandas have been kept in zoos since as early as the Western Han dynasty in China, when the writer Sima Xiangru noted that the panda was the most treasured animal in the emperor's garden of exotic animals in the capital Chang'an (present-day Xi'an). Not until the 1950s were pandas again recorded to have been exhibited in China's zoos. Chi Chi at the London Zoo became very popular, which influenced the World Wildlife Fund to use a panda as its symbol. A 2006 New York Times article outlined the economics of keeping pandas, which costs five times more than keeping the next most expensive animal, an elephant. American zoos generally pay the Chinese government $1 million a year in fees, as part of a typical ten-year contract. San Diego's contract with China was to expire in 2008, but got a five-year extension at about half of the previous yearly cost. The last contract, with the Memphis Zoo in Memphis, Tennessee, ended in 2013.
In the 1970s, gifts of giant pandas to American and Japanese zoos formed an important part of the diplomacy of the People's Republic of China (PRC), as they marked some of the first cultural exchanges between China and the West. This practice has been termed "panda diplomacy". By 1984, however, pandas were no longer given as gifts. Instead, China began to offer pandas to other nations only on 10-year loans, for a fee of up to US$1,000,000 per year and with the provision that any cubs born during the loan are the property of China. As a result of this change in policy, nearly all the pandas in the world are owned by China, and pandas leased to foreign zoos, along with any cubs, are eventually returned to China. As of 2022, Xin Xin, at the Chapultepec Zoo in Mexico City, was the last living descendant of the gifted pandas.
Since 1998, because of a WWF lawsuit, the United States Fish and Wildlife Service only allows US zoos to import a panda if the zoo can ensure that China channels more than half of its loan fee into conservation efforts for giant pandas and their habitat. In May 2005, China offered a breeding pair to Taiwan. The issue became embroiled in cross-Strait relations – due to both the underlying symbolism and technical issues, such as whether the transfer would be considered "domestic" or "international", or whether any true conservation purpose would be served by the exchange. A contest in 2006 to name the pandas was held in the mainland, resulting in the politically charged names Tuan Tuan and Yuan Yuan (from tuányuán, "reunion", implying reunification). China's offer was initially rejected by Chen Shui-bian, then President of Taiwan. However, when Ma Ying-jeou assumed the presidency in 2008, the offer was accepted, and the pandas arrived in December of that year.
In the 2020s, certain "celebrity pandas" have gained a cult following among internet users, with dedicated fan accounts keeping tabs on the animals. In a phenomenon known as "giant panda fever" or "panda-monium", individual pandas attract billions of views and engagements on social media, as well as product lines specifically emulating them. At the Chengdu Research Base of Giant Panda Breeding, some of these celebrity pandas draw hours-long lines of visitors who come specifically to see them.
Conservation
The giant panda is a vulnerable species, threatened by continued habitat loss and fragmentation and by a very low birthrate, both in the wild and in captivity. Its range is confined to a small area on the western edge of its historical range, which stretched through southern and eastern China, northern Myanmar, and northern Vietnam. The species is scattered across more than 30 subpopulations of relatively few animals. The building of roads and human settlement near panda habitat result in population declines, and diseases from domesticated pets and livestock are another threat. By 2100, it is estimated that the distribution of giant pandas will shrink by up to 100%, mainly due to the effects of climate change. The giant panda is listed on CITES Appendix I, meaning trade in its parts is prohibited and that the species requires this protection to avoid extinction. It has also been protected and placed in Category 1 under the 1988 Wildlife Protection Act.
The giant panda has been a target of poaching by locals since ancient times and by foreigners since it was introduced to the West. Starting in the 1930s, foreigners were unable to poach giant pandas in China because of the Second Sino-Japanese War and the Chinese Civil War, but pandas remained a source of soft furs for the locals. The population boom in China after 1949 created stress on the pandas' habitat and the subsequent famines led to the increased hunting of wildlife, including pandas. After the Chinese economic reform, demand for panda skins from Hong Kong and Japan led to illegal poaching for the black market, acts generally ignored by the local officials at the time. In 1963, the PRC government set up Wolong National Nature Reserve to save the declining panda population.
The giant panda is among the world's most adored and protected rare animals, and one of the few whose natural habitat has earned a UNESCO World Heritage Site designation. The Sichuan Giant Panda Sanctuaries, located in the southwest province of Sichuan and covering seven nature reserves, were inscribed on the World Heritage List in 2006. A 2015 paper found that the giant panda can serve as an umbrella species, as the preservation of its habitat also helps other species endemic to China, including 70% of the country's forest birds, 70% of mammals and 31% of amphibians.
In 2012, Earthwatch Institute, a global nonprofit that teams volunteers with scientists to conduct environmental research, launched a program called "On the Trail of Giant Panda". This program, based in the Wolong National Nature Reserve, allows volunteers to work up close with captive pandas and help them adapt to life in the wild, so that they may breed and live longer, healthier lives. Efforts to preserve panda populations in China have come at the expense of other animals in the region, including snow leopards, wolves, and dholes. In order to improve living and mating conditions for the fragmented panda populations, nearly 70 nature reserves were combined in 2020 to form the Giant Panda National Park. With a size of 10,500 square miles, the park is roughly three times as large as Yellowstone National Park and incorporates the Wolong National Nature Reserve. Small, isolated populations run the risk of inbreeding, and reduced genetic variety makes individuals more vulnerable to defects and genetic mutations.
Population
In 2006, scientists reported that the number of pandas living in the wild may have been underestimated at about 1,000. Previous population surveys had used conventional methods to estimate the size of the wild panda population, but using a new method that analyzes DNA from panda droppings, scientists believed the wild population was as large as 3,000. In 2006, there were 40 panda reserves in China, compared to just 13 reserves in 1998. Because the species was reclassified from "endangered" to "vulnerable" in 2016, the conservation efforts are thought to be working. In response to this reclassification, the State Forestry Administration of China announced that it would not accordingly lower the conservation level for the panda, and would instead reinforce its conservation efforts.
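Faecal DNA surveys work like a mark-recapture study: each genotyped dropping "captures" an individual, and the overlap between survey rounds indicates what fraction of the population has been seen. As a minimal sketch of that idea only (the 2006 analysis used more sophisticated estimators, and all counts below are hypothetical), the classic Lincoln–Petersen estimator with Chapman's correction looks like this:

```python
def lincoln_petersen(n1: int, n2: int, m2: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen population estimate.

    n1 -- distinct genotypes found in the first survey round ("marked")
    n2 -- distinct genotypes found in the second round
    m2 -- genotypes found in both rounds ("recaptures")
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts for illustration only.
estimate = lincoln_petersen(n1=120, n2=150, m2=60)
print(round(estimate))  # -> 299 individuals in the surveyed area
```

The fewer the recaptures relative to the survey sizes, the larger the implied population, which is how genotyping droppings can revise an estimate sharply upward.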
In 2020, the panda population of the new national park was already above 1,800 individuals, roughly 80 percent of the entire panda population in China. Establishing the new protected area in Sichuan Province also gives various other endangered or threatened species, such as the Siberian tiger, the chance to improve their living conditions by offering them a habitat. Other species that benefit from the protection of their habitat include the snow leopard, the golden snub-nosed monkey, the red panda and the complex-toothed flying squirrel.
In July 2021, Chinese conservation authorities announced that giant pandas are no longer endangered in the wild following years of conservation efforts, with a population in the wild exceeding 1,800. China has received international praise for its conservation of the species, which has also helped the country establish itself as a leader in endangered species conservation.
Giraffe
The giraffe is a large African hoofed mammal belonging to the genus Giraffa. It is the tallest living terrestrial animal and the largest ruminant on Earth. Traditionally, giraffes have been thought of as one species, Giraffa camelopardalis, with nine subspecies. Most recently, researchers proposed dividing them into four extant species due to new research into their mitochondrial and nuclear DNA, and individual species can be distinguished by their fur coat patterns. Seven other extinct species of Giraffa are known from the fossil record.
The giraffe's distinguishing characteristics are its extremely long neck and legs, horn-like ossicones, and spotted coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits, and flowers of woody plants, primarily acacia species, which they browse at heights most other ground-based herbivores cannot reach.
Lions, leopards, spotted hyenas, and African wild dogs may prey upon giraffes. Giraffes live in herds of related females and their offspring or bachelor herds of unrelated adult males but are gregarious and may gather in large groups. Males establish social hierarchies through "necking", combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear sole responsibility for rearing the young.
The giraffe has intrigued various ancient and modern cultures for its peculiar appearance and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature (IUCN) as vulnerable to extinction. It has been extirpated from many parts of its former range. Giraffes are still found in many national parks and game reserves, but estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. More than 1,600 were kept in zoos in 2010.
Etymology
The name "giraffe" has its earliest known origins in the Arabic word (), of an ultimately unclear Sub-Saharan African language origin. The Middle English and early Modern English spellings, and , derive from the Arabic form-based Spanish and Portuguese girafa. The modern English form developed around 1600 from the French .
"Camelopard" () is an archaic English name for the giraffe; it derives from the Ancient Greek (), from (), "camel", and (), "leopard", referring to its camel-like shape and leopard-like colouration.
Taxonomy
Evolution
The giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. They are ruminants of the clade Pecora, along with Antilocapridae (pronghorns), Cervidae (deer), Bovidae (cattle, antelope, goats and sheep) and Moschidae (musk deer). A 2019 genome study found that Giraffidae are a sister taxon to Antilocapridae, estimated to have split over 20 million years ago.
The family Giraffidae was once much more extensive, with over 10 fossil genera described. The elongation of the neck appears to have started early in the giraffe lineage. Comparisons between giraffes and their ancient relatives suggest vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was Canthumeryx, which has been dated variously to have lived , 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal resembled an antelope and had a medium-sized, lightly built body. Giraffokeryx appeared 15–12 mya on the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. Giraffokeryx may have shared a clade with more massively built giraffids like Sivatherium and Bramatherium.
Giraffids like Palaeotragus, Shansitherium and Samotherium appeared 14 mya and lived throughout Africa and Eurasia. These animals had broader skulls with reduced frontal cavities. Palaeotragus resembled the okapi and may have been its ancestor; others find that the okapi lineage diverged earlier, before Giraffokeryx. Samotherium was a particularly important transitional fossil in the giraffe lineage, as the length and structure of its cervical vertebrae were between those of a modern giraffe and an okapi, and its neck posture was likely similar to the former's. Bohlinia, which first appeared in southeastern Europe and lived 9–7 mya, was likely a direct ancestor of the giraffe. Bohlinia closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition.
Bohlinia colonised China and northern India and gave rise to Giraffa, which, around , reached Africa. Climate changes led to the extinction of the Asian giraffes, while the African giraffes survived and radiated into new species. Living giraffes appear to have arisen around in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from G. jumae; others find G. gracilis a more likely candidate. G. jumae was larger and more robust, while G. gracilis was smaller and more slender.
The changes from extensive forests to more open habitats, which began 8 mya, are believed to be the main driver for the evolution of giraffes. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat, coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations.
The giraffe genome is around 2.9 billion base pairs in length, compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appears responsible for the animal's height and associated circulatory adaptations.
Species and subspecies
The International Union for Conservation of Nature (IUCN) currently recognises only one species of giraffe with nine subspecies.
Carl Linnaeus originally classified living giraffes as one species in 1758. He gave it the binomial name Cervus camelopardalis. Mathurin Jacques Brisson coined the generic name Giraffa in 1762. During the 1900s, various taxonomies with two or three species were proposed. A 2007 study on the genetics of giraffes using mitochondrial DNA suggested at least six lineages could be recognised as species. A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 to 2 million years.
A 2020 study showed that depending on the method chosen, different taxonomic hypotheses recognizing from two to six species can be considered for the genus Giraffa. That study also found that multi-species coalescent methods can lead to taxonomic over-splitting, as those methods delimit geographic structures rather than species. The three-species hypothesis, which recognises G. camelopardalis, G. giraffa, and G. tippelskirchi, is highly supported by phylogenetic analyses and also corroborated by most population genetic and multi-species coalescent analyses. A 2021 whole genome sequencing study suggests the existence of four distinct species and seven subspecies, which was supported by a 2024 study of cranial morphology. A 2024 study found a higher amount of ancient gene flow than expected between populations.
A cladogram based on a 2021 genome analysis resolves the phylogenetic relationships among the four proposed species and seven subspecies. The eight lineages correspond to the eight traditional subspecies of the one-species hypothesis. The Rothschild giraffe is subsumed into G. camelopardalis camelopardalis.
The different hypotheses for giraffe species can be compared against the traditional nine subspecies of the one-species hypothesis.
The first extinct species to be described was Giraffa sivalensis Falconer and Cautley 1843, a reevaluation of a vertebra that was initially described as a fossil of the living giraffe. While taxonomic opinion may be lacking on some names, the extinct species that have been published include:
Giraffa gracilis
Giraffa jumae
Giraffa pomeli
Giraffa priscilla
Giraffa punjabiensis
Giraffa pygmaea
Giraffa sivalensis
Giraffa stillei
Anatomy
Fully grown giraffes stand tall, with males taller than females. The average weight is for an adult male and for an adult female. Despite its long neck and legs, its body is relatively short. The skin is mostly gray or tan, and can reach a thickness of . The long tail ends in a long, dark tuft of hair and is used as a defense against insects.
The coat has dark blotches or patches, which can be orange, chestnut, brown, or nearly black, surrounded by light hair, usually white or cream coloured. Male giraffes become darker as they grow old. The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. When standing among trees and bushes, they are hard to see at even a few metres distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves rather than on camouflage, which may be more important for calves. Each giraffe has a unique coat pattern. Calves inherit some coat pattern traits from their mothers, and variation in some spot traits is correlated with calf survival. The skin under the blotches may regulate the animal's body temperature, being sites for complex blood vessel systems and large sweat glands. Spotless or solid-color giraffes are very rare, but have been observed.
The fur may give the animal chemical defense, as its parasite repellents give it a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because males have a stronger odour than females, the scent may also have a sexual function.
Head
Both sexes have prominent horn-like structures called ossicones, which can reach . They are formed from ossified cartilage, covered in skin, and fused to the skull at the parietal bones. Being vascularised, the ossicones may have a role in thermoregulation, and they are used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males tend to be bald and knobbed on top. A lump, which is more prominent in males, emerges in the middle of the skull. Males develop calcium deposits that form bumps on their skulls as they age. Multiple sinuses lighten a giraffe's skull; however, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The occipital condyles at the bottom of the skull allow the animal to tip its head over 90 degrees and grab food on branches directly above it with the tongue.
With eyes located on the sides of the head, the giraffe has a broad visual field from its great height. Compared to other ungulates, giraffe vision is more binocular and the eyes are larger with a greater retinal surface area. Giraffes may see in colour, and their senses of hearing and smell are sharp. The ears are movable. The nostrils are slit-shaped, possibly to withstand blowing sand. The giraffe's tongue is about long. It is black, perhaps to protect against sunburn, and can grasp foliage and delicately pick off leaves. The upper lip is flexible and hairy to protect against sharp prickles. The upper jaw has a hard palate instead of front teeth. The molars and premolars are wide with low crowns on the surface.
Neck
The giraffe has an extremely elongated neck, which can be up to in length. Along the neck is a mane made of short, erect hairs. The neck typically rests at an angle of 50–60 degrees, though juveniles are closer to 70 degrees. The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over long. The cervical vertebrae comprise 52–54 percent of the length of the giraffe's vertebral column, compared with the 27–33 percent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. The giraffe's head and neck are held up by large muscles and a nuchal ligament, which are anchored by long spines on the thoracic vertebrae, giving the animal a hump.
The giraffe's neck vertebrae have ball and socket joints. The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike in most other ruminants, where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies.
There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. Charles Darwin originally suggested the "competing browsers hypothesis", which has been challenged only recently. It suggests that competitive pressure from smaller browsers, like kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to high, while even quite large competitors, such as kudu, can feed up to only about high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy. However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers, and a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage.
Another theory, the sexual selection hypothesis, proposes that long necks evolved as a secondary sexual characteristic, giving males an advantage in "necking" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, some studies have stated that necks are longer and heavier for males than females of the same age, and that males do not employ other forms of combat. However, a 2024 study found that, while males have thicker necks, females actually have proportionally longer ones, which is likely because of their greater need to find more food to sustain themselves and their dependent young. It has also been proposed that the neck serves to give the animal greater vigilance.
Legs, locomotion and posture
The front legs tend to be longer than the hind legs, and males have proportionally longer front legs than females, which gives them better support when swinging their necks during fights. The leg bones lack first, second and fifth metapodials. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The hooves of large male giraffes reach in diameter. The fetlock of the leg is low to the ground, allowing the hoof to better support the animal's weight. Giraffes lack dewclaws and interdigital glands. While the pelvis is relatively short, the ilium has stretched-out crests.
A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The movements of the head and neck provide balance and control momentum while galloping. The giraffe can reach a sprint speed of up to , and can sustain such speeds for several kilometres. Giraffes would probably not be competent swimmers, as their long legs would be highly cumbersome in the water, although they might be able to float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the water's surface.
A giraffe rests by lying with its body on top of its folded legs. To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its front knees and positions its backside on top of its hindlegs. It then pulls the backside upwards, and the front legs stand straight up again. At each stage, the animal swings its head for balance. If the giraffe wants to reach down to drink, it either spreads its front legs or bends its knees. Studies in captivity found the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short "deep sleep" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep.
Internal systems
In mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe, it is over longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve, which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly in the largest giraffes. Despite its long neck and large skull, the brain of the giraffe is typical for an ungulate. Evaporative heat loss in the nasal passages keeps the giraffe's brain cool. The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. The giraffe also has a high tidal volume, so the balance of dead space and tidal volume is much the same as in other mammals. The animal can still provide enough oxygen for its tissues, and it can increase its respiratory rate and oxygen diffusion when running.
The giraffe's circulatory system has several adaptations to compensate for its great height. Its heart must generate approximately double the blood pressure required in a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as . Giraffes have relatively high heart rates for their size, at 150 beats per minute. When the animal lowers its head, the blood rushes down fairly unopposed, and a rete mirabile in the upper neck, with its large cross-sectional area, prevents excess blood flow to the brain. When the head is raised again, the blood vessels constrict and push blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them.
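The doubled pressure is largely a matter of hydrostatics: the heart must overcome the weight of the blood column between heart and brain. A back-of-the-envelope check, assuming a blood density of about 1,050 kg/m³ and a heart-to-brain height of roughly 2 m (round figures for illustration, not measured giraffe values), recovers the right order of magnitude:

```python
RHO_BLOOD = 1050.0   # kg/m^3, approximate density of blood
G = 9.81             # m/s^2, gravitational acceleration
HEIGHT = 2.0         # m, assumed heart-to-brain distance (illustrative)

# Hydrostatic pressure difference: delta_p = rho * g * h
delta_p_pa = RHO_BLOOD * G * HEIGHT
delta_p_mmhg = delta_p_pa / 133.322  # 1 mmHg = 133.322 Pa

print(f"{delta_p_mmhg:.0f} mmHg")    # ~155 mmHg just to lift blood to the head
```

Adding that roughly 155 mmHg to the pressure needed at the brain itself is consistent with giraffe arterial pressure being about twice the human value.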
Giraffes have oesophageal muscles that are strong enough to allow regurgitation of food from the stomach up the neck and into the mouth for rumination. They have four-chambered stomachs, which are adapted to their specialized diet. The intestines of an adult giraffe measure more than in length and have a relatively small ratio of small to large intestine. The giraffe has a small, compact liver. In fetuses there may be a small gallbladder that vanishes before birth.
Behaviour and ecology
Habitat and feeding
Giraffes usually inhabit savannahs and open woodlands. They prefer areas dominated by Acacieae, Commiphora, Combretum and Terminalia trees over the more densely spaced Brachystegia woodlands. The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring those of the subfamily Acacieae and the genera Commiphora and Terminalia, which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit. A giraffe eats around of plant matter daily. When stressed, giraffes may chew on large branches, stripping them of bark. Giraffes are also recorded to chew old bones.
During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again. The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. The animal's faeces come in the form of small pellets. When it has access to water, a giraffe will go no more than three days without drinking.
Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving "waistlines" to particularly tall trees. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down.
Social life
Giraffes usually form groups that vary in size and composition according to ecological, anthropogenic, temporal, and social factors. Traditionally, the composition of these groups had been described as open and ever-changing. For research purposes, a "group" has been defined as "a collection of individuals that are less than a kilometre apart and moving in the same general direction". More recent studies have found that giraffes have long-lasting social groups or cliques based on kinship, sex or other factors, and these groups regularly associate with other groups in larger communities or sub-communities within a fission–fusion society. Proximity to humans can disrupt social arrangements. Masai giraffes in Tanzania sort themselves into different subpopulations of 60–90 adult females with overlapping ranges, each of which differ in reproductive rates and calf mortality. Dispersal is male biased, and can include spatial and/or social dispersal. Adult female subpopulations are connected by males into super communities of around 300 animals.
The number of giraffes in a group can range from one up to 66 individuals. Giraffe groups tend to be sex-segregated although mixed-sex groups made of adult females and young males also occur. Female groups may be matrilineally related. Generally, females are more selective than males when deciding which individuals of the same sex they associate with. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Young males also form groups and will engage in playfights. However, as they get older, males become more solitary but may also associate in pairs or with female groups. Giraffes are not territorial, but they have home ranges that vary according to rainfall and proximity to human settlements. Male giraffes occasionally roam far from areas that they normally frequent.
Early biologists suggested giraffes were mute and unable to create enough air flow to vibrate their vocal folds. This has since been disproved; giraffes have been recorded communicating using snorts, sneezes, coughs, snores, hisses, bursts, moans, grunts, growls and flute-like sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit bleats, mooing and mewing sounds. Snorting and hissing are associated with vigilance. During nighttime, giraffes appear to hum to each other. There is some evidence that giraffes use Helmholtz resonance to create infrasound. They also communicate with body language: dominant males display to other males with an erect posture, holding the chin and head up while walking stiffly and displaying their side, while less dominant individuals show submissiveness by dropping the head and ears, lowering the chin and fleeing.
Reproduction and parental care
Reproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Females can reproduce throughout the year and experience oestrus cycling approximately every 15 days. Female giraffes in oestrus are dispersed over space and time, so reproductive adult males adopt a strategy of roaming among female groups to seek mating opportunities, with periodic hormone-induced rutting behaviour approximately every two weeks. Males prefer young adult females over juveniles and older adults.
Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, lay his head and neck on her body or nudge her with his ossicones. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides.
Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. A newborn giraffe is tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first one to three weeks, it spends most of its time hiding, its coat pattern providing camouflage. The ossicones, which have lain flat in the womb, raise up in a few days.
Mothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a "calving pool". Calves are at risk of predation, and a mother giraffe will stand over them and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. Allo-sucking, where a calf will suckle a female other than its mother, has been recorded in both wild and captive giraffes. Calves first ruminate at four to six months and stop nursing at six to eight months. Young may not reach independence until they are 14 months old. Females are able to reproduce at four years of age, while spermatogenesis in males begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate.
Necking
Male giraffes use their necks as weapons in combat, a behaviour known as "necking". Necking is used to establish dominance, and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low-intensity necking, the combatants rub and lean on each other. The male that can keep itself more upright wins the bout. In high-intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. The contestants will try to dodge each other's blows and then prepare to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are. Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths.
After a duel, it is common for two male giraffes to caress and court each other. Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 percent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30 to 75 percent. Only one percent of same-sex mounting incidents occurred between females.
Mortality and health
Giraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 38 years. Adult female survival is significantly correlated with the number of social associations. Because of their size, eyesight and powerful kicks, adult giraffes are mostly safe from predation, with lions being their only major threats. Calves are much more vulnerable than adults and are also preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates.
The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. In turn, it has been suggested that other ungulates may benefit from associating with giraffes, as their height allows them to spot predators from further away. Zebras were found to assess predation risk by watching giraffes and spend less time looking around when giraffes are present.
Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which has thinner skin than other areas. Tick species that commonly feed on giraffes are those of the genera Hyalomma, Amblyomma and Rhipicephalus. Red-billed and yellow-billed oxpeckers clean giraffes of ticks and alert them to danger. Giraffes host numerous species of internal parasites and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. In Ruaha National Park, as many as 79% of giraffes show symptoms of the disease, but it did not cause mortality in Tarangire National Park and is less prevalent in areas with fertile soils.
Human relations
Cultural significance
With its lanky build and spotted coat, the giraffe has been a source of fascination throughout human history, and its image is widespread in culture. It has represented flexibility, far-sightedness, femininity, fragility, passivity, grace, beauty and the continent of Africa itself.
Giraffes were depicted in art throughout the African continent, including that of the Kiffians, Egyptians, and Kushites. The Kiffians were responsible for a life-size rock engraving of two giraffes, dated 8,000 years ago, that has been called the "world's largest rock art petroglyph". How the giraffe got its height has been the subject of various African folktales. The Tugen people of modern Kenya used the giraffe to depict their god Mda. The Egyptians gave the giraffe its own hieroglyph; 'sr' in Old Egyptian and 'mmy' in later periods.
Giraffes have a presence in modern Western culture. Salvador Dalí depicted them with burning manes in some surrealist paintings. Dalí considered the giraffe to be a masculine symbol. A flaming giraffe was meant to be a "masculine cosmic apocalyptic monster". Several children's books feature the giraffe, including David A. Ufer's The Giraffe Who Was Afraid of Heights, Giles Andreae's Giraffes Can't Dance and Roald Dahl's The Giraffe and the Pelly and Me. Giraffes have appeared in animated films as minor characters in Disney's Dumbo and The Lion King, and in more prominent roles in The Wild and the Madagascar films. Sophie the Giraffe has been a popular teether since 1961. Another famous fictional giraffe is the Toys "R" Us mascot Geoffrey the Giraffe.
The giraffe has also been used for some scientific experiments and discoveries. Scientists have used the properties of giraffe skin as a model for astronaut and fighter pilot suits because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the 17th century, depicts a giraffe. The Tswana people of Botswana traditionally see the constellation Crux as two giraffes—Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female.
Captivity
The Egyptians were among the earliest people to keep giraffes in captivity and shipped them around the Mediterranean. The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC. With the fall of the Western Roman Empire, the housing of giraffes in Europe declined. During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance.
Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe from Malindi was taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin. The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift for Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or "giraffanalia".
Giraffes have become popular attractions in modern zoos, though keeping them is difficult as they prefer large areas and need to eat large amounts of browse. Captive giraffes in North America and Europe appear to have a higher mortality rate than in the wild, the most common causes being poor husbandry, nutrition, and management. Giraffes in zoos display stereotypical behaviours, particularly the licking of inanimate objects and pacing. Zookeepers may offer various activities to stimulate giraffes, including training them to take food from visitors. Stables for giraffes are built particularly high to accommodate their height.
Exploitation
Giraffes were probably common targets for hunters throughout Africa. Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs were flyswatters, bracelets, necklaces, and threads. Shields, sandals, and drums were made using the skin, and the strings of musical instruments were from the tendons. In Buganda, the smoke of burning giraffe skin was traditionally used to treat nosebleeds. The Humr people of Kordofan consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley hypothesised that Umm Nyolokh might contain DMT. The drink is said to cause hallucinations of giraffes, believed to be the giraffes' ghosts, by the Humr.
Conservation status
In 2016, giraffes were assessed as Vulnerable from a conservation perspective by the IUCN. In 1985, it was estimated there were 155,000 giraffes in the wild. By 1999, this had declined to just over 140,000. Estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. The Masai and reticulated subspecies are endangered, the Rothschild subspecies is near threatened, and the Nubian subspecies is critically endangered.
The primary causes of giraffe population declines are habitat loss and direct killing for bushmeat markets. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Eswatini. More than 1,600 were kept in captivity at Species360-registered zoos. Habitat destruction has hurt the giraffe: in the Sahel, the need for firewood and grazing room for livestock has led to deforestation. Normally, giraffes can coexist with livestock, since they avoid direct competition by feeding above them. In 2017, severe droughts in northern Kenya led to increased tensions over land and the killing of wildlife by herders, with giraffe populations being particularly hard hit.
Protected areas like national parks provide important habitat and anti-poaching protection to giraffe populations. Community-based conservation efforts outside national parks are also effective at protecting giraffes and their habitats, and private game reserves have contributed to the preservation of giraffe populations in eastern and southern Africa. The giraffe is a protected species in most of its range. It is the national animal of Tanzania, where it is protected by law and unauthorised killing can result in imprisonment. The UN-backed Convention on Migratory Species selected giraffes for protection in 2017. In 2019, giraffes were listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), meaning that international trade, including in parts and derivatives, is regulated.
Translocations are sometimes used to augment or re-establish diminished or extirpated populations, but these activities are risky and difficult; best practice calls for extensive pre- and post-translocation studies and for ensuring a viable founding population. Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and can be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends.
| Biology and health sciences | Artiodactyla | null |
12733 | https://en.wikipedia.org/wiki/Giant%20planet | Giant planet | A giant planet, sometimes referred to as a jovian planet (Jove being another name for the Roman god Jupiter), is a diverse type of planet much larger than Earth. Giant planets are usually primarily composed of low-boiling point materials (volatiles), rather than rock or other solid matter, but massive solid planets can also exist. There are four such planets in the Solar System: Jupiter, Saturn, Uranus, and Neptune. Many extrasolar giant planets have been identified.
Giant planets are sometimes known as gas giants, but many astronomers now apply the term only to Jupiter and Saturn, classifying Uranus and Neptune, which have different compositions, as ice giants. Both names are potentially misleading; the Solar System's giant planets all consist primarily of fluids above their critical points, where distinct gas and liquid phases do not exist. Jupiter and Saturn are principally made of hydrogen and helium, whilst Uranus and Neptune consist of water, ammonia, and methane.
The defining differences between a very low-mass brown dwarf and a massive gas giant are debated. One school of thought is based on planetary formation; the other, on the physics of the interior of planets. Part of the debate concerns whether brown dwarfs must, by definition, have experienced nuclear fusion at some point in their history.
Terminology
The term gas giant was coined in 1952 by science fiction writer James Blish and was originally used to refer to all giant planets. Arguably it is something of a misnomer, because throughout most of the volume of these planets the pressure is so high that matter is not in gaseous form. Other than the upper layers of the atmosphere, all matter is likely beyond the critical point, where there is no distinction between liquids and gases. Fluid planet would be a more accurate term. Jupiter also has metallic hydrogen near its center, but much of its volume is hydrogen, helium, and traces of other gases above their critical points. The observable atmospheres of all these planets (at less than a unit optical depth) are quite thin compared to their radii, only extending perhaps one percent of the way to the center. Thus, the observable parts are gaseous (in contrast to Mars and Earth, which have gaseous atmospheres through which the crust can be seen).
The rather misleading term has caught on because planetary scientists typically use rock, gas, and ice as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of the matter's phase. In the outer Solar System, hydrogen and helium are referred to as gas; water, methane, and ammonia as ice; and silicates and metals as rock. When deep planetary interiors are considered, it may not be far off to say that by ice astronomers mean oxygen and carbon, by rock they mean silicon, and by gas they mean hydrogen and helium. The many ways in which Uranus and Neptune differ from Jupiter and Saturn have led some to use the term only for planets similar to the latter two. With this terminology in mind, some astronomers have started referring to Uranus and Neptune as ice giants to indicate the predominance of the ices (in fluid form) in their interior composition.
The alternative term jovian planet refers to the Roman god Jupiter—the genitive form of which is Jovis, hence Jovian—and was intended to indicate that all of these planets were similar to Jupiter.
Objects large enough to start deuterium fusion (above 13 Jupiter masses for solar composition) are called brown dwarfs, and these occupy the mass range between that of large giant planets and the lowest-mass stars. The 13-Jupiter-mass cutoff is a rule of thumb rather than something of precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the value is somewhere in between. The amount of deuterium burnt depends not only on the mass but also on the composition of the planet, especially on the amount of helium and deuterium present. The Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses.
Description
A giant planet is a massive planet with a thick atmosphere of hydrogen and helium. Such planets may have a condensed "core" of heavier elements delivered during the formation process; this core may be partially or completely dissolved and dispersed throughout the hydrogen/helium envelope. In "traditional" giant planets such as Jupiter and Saturn (the gas giants), hydrogen and helium make up most of the mass of the planet, whereas they only make up an outer envelope on Uranus and Neptune, which are instead mostly composed of water, ammonia, and methane and are therefore increasingly referred to as "ice giants".
Extrasolar giant planets that orbit very close to their stars are the exoplanets that are easiest to detect. These are called hot Jupiters and hot Neptunes because they have very high surface temperatures. Hot Jupiters were, until the advent of space-borne telescopes, the most common form of exoplanet known, due to the relative ease of detecting them with ground-based instruments.
Giant planets are commonly said to lack solid surfaces, but it is more accurate to say that they lack surfaces altogether since the gases that form them simply become thinner and thinner with increasing distance from the planets' centers, eventually becoming indistinguishable from the interplanetary medium. Therefore, landing on a giant planet may or may not be possible, depending on the size and composition of its core.
Subtypes
Gas giants
Gas giants consist mostly of hydrogen and helium. The Solar System's gas giants, Jupiter and Saturn, have heavier elements making up between 3 and 13 percent of their mass. Gas giants are thought to consist of an outer layer of molecular hydrogen, surrounding a layer of liquid metallic hydrogen, with a probable molten core with a rocky composition.
Jupiter and Saturn's outermost portion of the hydrogen atmosphere has many layers of visible clouds that are mostly composed of water and ammonia. The layer of metallic hydrogen makes up the bulk of each planet, and is referred to as "metallic" because the very high pressure turns hydrogen into an electrical conductor. The core is thought to consist of heavier elements at such high temperatures (20,000 K) and pressures that their properties are poorly understood.
Ice giants
Ice giants have distinctly different interior compositions from gas giants. The Solar System's ice giants, Uranus and Neptune, have a hydrogen-rich atmosphere that extends from the cloud tops down to about 80% (Uranus) or 85% (Neptune) of their radius. Below this, they are predominantly "icy", i.e. consisting mostly of water, methane, and ammonia. There is also some rock and gas, but various proportions of ice–rock–gas could mimic pure ice, so that the exact proportions are unknown.
Uranus and Neptune have very hazy atmospheric layers with small amounts of methane, giving them light aquamarine colors. Both have magnetic fields that are sharply inclined to their axes of rotation.
Unlike the other giant planets, Uranus has an extreme tilt that causes its seasons to be severely pronounced. The two planets also have other subtle but important differences. Uranus has more hydrogen and helium than Neptune despite being less massive overall. Neptune is therefore denser and has much more internal heat and a more active atmosphere. The Nice model, in fact, suggests that Neptune formed closer to the Sun than Uranus did, and should therefore have more heavy elements.
Massive solid planets
Massive solid planets seemingly can also exist, though their formation mechanisms and occurrence remain subjects of ongoing research and debate.
The possibility of solid planets up to thousands of Earth masses forming around massive stars (B-type and O-type stars; 5–120 solar masses) has been suggested in some earlier studies. The hypothesis proposed that the protoplanetary disk around such stars would contain enough heavy elements, and that high UV radiation and strong winds could photoevaporate the gas in the disk, leaving just the heavy elements. For comparison, Neptune's mass equals 17 Earth masses, Jupiter has 318 Earth masses, and the 13 Jupiter-mass limit used in the IAU's working definition of an exoplanet equals approximately 4000 Earth masses.
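These mass comparisons are easy to verify. As a minimal sketch, the following uses the commonly quoted conversion factor of roughly 317.8 Earth masses per Jupiter mass (an assumed value, not stated in the text) to reproduce the approximately 4000 Earth-mass figure for the 13-Jupiter-mass limit:

```python
JUPITER_MASS_IN_EARTHS = 317.8  # commonly quoted conversion factor; an assumption here

# The 13-Jupiter-mass deuterium-fusion cutoff, expressed in Earth masses
print(round(13 * JUPITER_MASS_IN_EARTHS))  # 4131, i.e. roughly 4000 Earth masses
```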
However, more recent research has called into question the likelihood of massive solid planets forming around very massive stars (https://arxiv.org/pdf/1103.0556). Studies have shown that the ratio of protoplanetary disk mass to stellar mass decreases rapidly for stars above 10 solar masses, falling to less than 10^-4. Furthermore, no protoplanetary disks have been observed around O-type stars to date.
The original suggestion of massive solid planets forming around 5–120 solar mass stars, presented in earlier literature, lacks substantial supporting evidence or citations to planetary formation theories. The study in question primarily focused on simulating mass–radius relationships for rocky planets, including hypothetical super-massive solid planets, but did not investigate whether planetary formation theories actually support the existence of such objects. The authors of that study acknowledged that "Such massive exoplanets are not yet known to exist."
Given these considerations, the formation and existence of massive solid planets around very massive stars remain speculative and require further research and observational evidence.
Super-Puffs
A super-puff is a type of exoplanet with a mass only a few times larger than Earth's but a radius larger than Neptune's, giving it a very low mean density. They are cooler and less massive than the inflated low-density hot Jupiters. The most extreme examples known are the three planets around Kepler-51, which are all Jupiter-sized but with densities below 0.1 g/cm3.
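To make the "very low mean density" claim concrete, here is a minimal sketch that computes the bulk density of a hypothetical super-puff from its mass and radius. The 5-Earth-mass, Jupiter-radius inputs are illustrative assumptions, not measured values for the Kepler-51 planets:

```python
import math

EARTH_MASS_KG = 5.972e24      # nominal Earth mass
JUPITER_RADIUS_M = 6.9911e7   # nominal Jupiter radius

def mean_density_g_cm3(mass_kg: float, radius_m: float) -> float:
    """Bulk density rho = M / ((4/3) * pi * R^3), converted to g/cm^3."""
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (mass_kg / volume_m3) / 1000.0  # 1 g/cm^3 = 1000 kg/m^3

# Hypothetical super-puff: 5 Earth masses inside a Jupiter-sized radius
print(mean_density_g_cm3(5 * EARTH_MASS_KG, JUPITER_RADIUS_M))  # ~0.02 g/cm^3, below the 0.1 g/cm^3 quoted above
```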
Extrasolar giant planets
Because of the limited techniques currently available to detect exoplanets, many of those found to date have been of a size associated, in the Solar System, with giant planets. Because these large planets are inferred to share more in common with Jupiter than with the other giant planets, some have claimed that "jovian planet" is a more accurate term for them. Many of the exoplanets are much closer to their parent stars and hence much hotter than the giant planets in the Solar System, making it possible that some of those planets are a type not observed in the Solar System. Considering the relative abundances of the elements in the universe (approximately 98% hydrogen and helium) it would be surprising to find a predominantly rocky planet more massive than Jupiter. On the other hand, models of planetary-system formation have suggested that giant planets would be inhibited from forming as close to their stars as many of the extrasolar giant planets have been observed to orbit.
Atmospheres
The bands seen in the atmosphere of Jupiter are due to counter-circulating streams of material called zones and belts, encircling the planet parallel to its equator. The zones are the lighter bands, and are at higher altitudes in the atmosphere. They have an internal updraft and are high-pressure regions. The belts are the darker bands, are lower in the atmosphere, and have an internal downdraft. They are low-pressure regions. These structures are somewhat analogous to the high and low-pressure cells in Earth's atmosphere, but they have a very different structure—latitudinal bands that circle the entire planet, as opposed to small confined cells of pressure. This appears to be a result of the rapid rotation and underlying symmetry of the planet. There are no oceans or landmasses to cause local heating and the rotation speed is much higher than that of Earth.
There are smaller structures as well: spots of different sizes and colors. On Jupiter, the most noticeable of these features is the Great Red Spot, which has been present for at least 300 years. These structures are huge storms. Some such spots are thunderheads as well.
| Physical sciences | Planetary science | null |
12737 | https://en.wikipedia.org/wiki/Gunpowder | Gunpowder | Gunpowder, also commonly known as black powder to distinguish it from modern smokeless powder, is the earliest known chemical explosive. It consists of a mixture of sulfur, charcoal (which is mostly carbon), and potassium nitrate (saltpeter). The sulfur and charcoal act as fuels while the saltpeter is an oxidizer. Gunpowder has been widely used as a propellant in firearms, artillery, rocketry, and pyrotechnics, including use as a blasting agent for explosives in quarrying, mining, building pipelines, tunnels, and roads.
Gunpowder is classified as a low explosive because of its relatively slow decomposition rate, low ignition temperature and consequently low brisance (breaking/shattering). Low explosives deflagrate (i.e., burn at subsonic speeds), whereas high explosives detonate, producing a supersonic shockwave. Ignition of gunpowder packed behind a projectile generates enough pressure to force the shot from the muzzle at high speed, but usually not enough force to rupture the gun barrel. It thus makes a good propellant but is less suitable for shattering rock or fortifications with its low-yield explosive power. Nonetheless, it was widely used to fill fused artillery shells (and used in mining and civil engineering projects) until the second half of the 19th century, when the first high explosives were put into use.
Gunpowder is one of the Four Great Inventions of China. Originally developed by Taoists for medicinal purposes, it was first used for warfare around AD 904. Its use in weapons has declined due to smokeless powder replacing it, whilst its relative inefficiency led to newer alternatives such as dynamite and ammonium nitrate/fuel oil replacing it in industrial applications.
Effect
Gunpowder is a low explosive: it does not detonate, but rather deflagrates (burns quickly). This is an advantage in a propellant device, where one does not desire a shock that would shatter the gun and potentially harm the operator; however, it is a drawback when an explosion is desired. In that case, the propellant (and most importantly, gases produced by its burning) must be confined. Since it contains its own oxidizer and additionally burns faster under pressure, its combustion is capable of bursting containers such as a shell, grenade, or improvised "pipe bomb" or "pressure cooker" casings to form shrapnel.
In quarrying, high explosives are generally preferred for shattering rock. However, because of its low brisance, gunpowder causes fewer fractures and results in more usable stone compared to other explosives, making it useful for blasting slate, which is fragile, or monumental stone such as granite and marble. Gunpowder is well suited for blank rounds, signal flares, burst charges, and rescue-line launches. It is also used in fireworks for lifting shells, in rockets as fuel, and in certain special effects.
Combustion converts less than half the mass of gunpowder to gas; most of it turns into particulate matter. Some of it is ejected, wasting propelling power, fouling the air, and generally being a nuisance (giving away a soldier's position, generating fog that hinders vision, etc.). Some of it ends up as a thick layer of soot inside the barrel, where it is a nuisance for subsequent shots and a cause of jamming in automatic weapons. Moreover, this residue is hygroscopic, and with the addition of moisture absorbed from the air it forms a corrosive substance. The soot contains potassium oxide or sodium oxide, which turns into potassium hydroxide or sodium hydroxide and corrodes wrought iron or steel gun barrels. Gunpowder arms therefore require thorough and regular cleaning to remove the residue.
Gunpowder loads can be used in modern firearms as long as they are not gas-operated. The most compatible modern guns are smoothbore-barreled shotguns that are long-recoil operated with chrome-plated essential parts such as barrels and bores. Such guns have minimal fouling and corrosion and are easier to clean.
History
China
The first confirmed reference to what can be considered gunpowder in China occurred in the 9th century AD during the Tang dynasty, first in a formula contained in the Taishang Shengzu Jindan Mijue (太上聖祖金丹秘訣) in 808, and then about 50 years later in a Taoist text known as the Zhenyuan miaodao yaolüe (真元妙道要略). The Taishang Shengzu Jindan Mijue mentions a formula composed of six parts sulfur to six parts saltpeter to one part birthwort herb. According to the Zhenyuan miaodao yaolüe, "Some have heated together sulfur, realgar and saltpeter with honey; smoke and flames result, so that their hands and faces have been burnt, and even the whole house where they were working burned down." Based on these Taoist texts, the invention of gunpowder by Chinese alchemists was likely an accidental byproduct from experiments seeking to create the elixir of life. This experimental medicine origin is reflected in its Chinese name huoyao (), which means "fire medicine". Saltpeter was known to the Chinese by the mid-1st century AD and was primarily produced in the provinces of Sichuan, Shanxi, and Shandong. There is strong evidence of the use of saltpeter and sulfur in various medicinal combinations. A Chinese alchemical text dated 492 noted saltpeter burnt with a purple flame, providing a practical and reliable means of distinguishing it from other inorganic salts, thus enabling alchemists to evaluate and compare purification techniques; the earliest Latin accounts of saltpeter purification are dated after 1200.
The earliest chemical formula for gunpowder appeared in the 11th century Song dynasty text, Wujing Zongyao (Complete Essentials from the Military Classics), written by Zeng Gongliang between 1040 and 1044. The Wujing Zongyao provides encyclopedic references to a variety of mixtures that included petrochemicals—as well as garlic and honey. A slow match for flame-throwing mechanisms using the siphon principle and for fireworks and rockets is mentioned. The mixture formulas in this book contain at most 50% saltpeter, not enough to create an explosion; they produce an incendiary instead. The Essentials was written by a Song dynasty court bureaucrat, and there is little evidence that it had any immediate impact on warfare; there is no mention of its use in the chronicles of the wars against the Tanguts in the 11th century, and China was otherwise mostly at peace during this century. However, gunpowder had already been used for fire arrows since at least the 10th century. Its first recorded military application dates to 904 in the form of incendiary projectiles. In the following centuries various gunpowder weapons such as bombs, fire lances, and the gun appeared in China. Explosive weapons such as bombs have been discovered in a shipwreck off the shore of Japan dated from 1281, during the Mongol invasions of Japan.
By 1083 the Song court was producing hundreds of thousands of fire arrows for their garrisons. Bombs and the first proto-guns, known as "fire lances", became prominent during the 12th century and were used by the Song during the Jin-Song Wars. Fire lances were first recorded to have been used at the Siege of De'an in 1132 by Song forces against the Jin. In the early 13th century the Jin used iron-casing bombs. Projectiles were added to fire lances, and re-usable fire lance barrels were developed, first out of hardened paper, and then metal. By 1257 some fire lances were firing wads of bullets. In the late 13th century metal fire lances became 'eruptors', proto-cannons firing co-viative projectiles (mixed with the propellant, rather than seated over it with a wad), and by 1287 at the latest, had become true guns, the hand cannon.
Middle East
According to Iqtidar Alam Khan, it was invading Mongols who introduced gunpowder to the Islamic world. The Muslims acquired knowledge of gunpowder sometime between 1240 and 1280, by which point the Syrian Hasan al-Rammah had written recipes, instructions for the purification of saltpeter, and descriptions of gunpowder incendiaries. It is implied by al-Rammah's usage of "terms that suggested he derived his knowledge from Chinese sources" and his references to saltpeter as "Chinese snow", fireworks as "Chinese flowers", and rockets as "Chinese arrows" that knowledge of gunpowder arrived from China. However, because al-Rammah attributes his material to "his father and forefathers", al-Hassan argues that gunpowder became prevalent in Syria and Egypt by "the end of the twelfth century or the beginning of the thirteenth". In Persia saltpeter was known as "Chinese salt" (namak-i chīnī) or "salt from Chinese salt marshes".
Hasan al-Rammah included 107 gunpowder recipes in his text al-Furusiyyah wa al-Manasib al-Harbiyya (The Book of Military Horsemanship and Ingenious War Devices), 22 of which are for rockets. If one takes the median of 17 of these 22 compositions for rockets (75% nitrates, 9.06% sulfur, and 15.94% charcoal), it is nearly identical to the modern reported ideal recipe of 75% potassium nitrate, 10% sulfur, and 15% charcoal. The text also mentions fuses, incendiary bombs, naphtha pots, fire lances, and an illustration and description of the earliest torpedo. The torpedo was called the "egg which moves itself and burns". Two iron sheets were fastened together and tightened using felt. The flattened pear-shaped vessel was filled with gunpowder, metal filings, "good mixtures", two rods, and a large rocket for propulsion. Judging by the illustration, it was evidently supposed to glide across the water. Fire lances were used in battles between the Muslims and Mongols in 1299 and 1303.
Al-Hassan claims that in the Battle of Ain Jalut of 1260, the Mamluks used "the first cannon in history" against the Mongols, utilizing a formula with near-identical ideal composition ratios for explosive gunpowder. Other historians urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word for gunpowder, naft, that they used for an earlier incendiary, naphtha.
The earliest surviving documentary evidence for cannons in the Islamic world is from an Arabic manuscript dated to the early 14th century. The author's name is uncertain but may have been Shams al-Din Muhammad, who died in 1350. Dating from around 1320–1350, the illustrations show gunpowder weapons such as gunpowder arrows, bombs, fire tubes, and fire lances or proto-guns. The manuscript describes a type of gunpowder weapon called a midfa which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some consider this to be a cannon while others do not. The problem with identifying cannons in early 14th century Arabic texts is the term midfa, which appears from 1342 to 1352 but cannot be proven to be true hand-guns or bombards. Contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Needham believes that in its original form the term midfa refers to the tube or cylinder of a naphtha projector (flamethrower), then after the invention of gunpowder it meant the tube of fire lances, and eventually it applied to the cylinder of hand-guns and cannons.
According to Paul E. J. Hammer, the Mamluks certainly used cannons by 1342. According to J. Lavin, cannons were used by Moors at the siege of Algeciras in 1343. A metal cannon firing an iron ball was described by Shihab al-Din Abu al-Abbas al-Qalqashandi between 1365 and 1376.
The musket appeared in the Ottoman Empire by 1465. In 1598, Chinese writer Zhao Shizhen described Turkish muskets as being superior to European muskets. The Chinese military book Wu Pei Chih (1621) later described Turkish muskets that used a rack-and-pinion mechanism, which was not known to have been used in European or Chinese firearms at the time.
The state-controlled manufacture of gunpowder by the Ottoman Empire, through early supply chains to obtain nitre, sulfur and high-quality charcoal from oaks in Anatolia, contributed significantly to its expansion between the 15th and 18th centuries. It was not until later in the 19th century that the syndicalist production of Turkish gunpowder was greatly reduced, coinciding with the decline of the empire's military might.
Europe
The earliest Western accounts of gunpowder appear in texts written by English philosopher Roger Bacon in 1267 called Opus Majus and Opus Tertium. The oldest written recipes in continental Europe were recorded under the name Marcus Graecus or Mark the Greek between 1280 and 1300 in the Liber Ignium, or Book of Fires.
Some sources mention possible gunpowder weapons being deployed by the Mongols against European forces at the Battle of Mohi in 1241. Professor Kenneth Warren Chase credits the Mongols for introducing into Europe gunpowder and its associated weaponry. However, there is no clear route of transmission, and while the Mongols are often pointed to as the likeliest vector, Timothy May points out that "there is no concrete evidence that the Mongols used gunpowder weapons on a regular basis outside of China." May also states, "however [, ...] the Mongols used the gunpowder weapon in their wars against the Jin, the Song and in their invasions of Japan."
Records show that, in England, gunpowder was being made in 1346 at the Tower of London; a powder house existed at the Tower in 1461, and in 1515 three King's gunpowder makers worked there. Gunpowder was also being made or stored at other royal castles, such as Portchester. The English Civil War (1642–1645) led to an expansion of the gunpowder industry, with the repeal of the Royal Patent in August 1641.
In late 14th century Europe, gunpowder was improved by corning, the practice of drying it into small clumps to improve combustion and consistency. During this time, European manufacturers also began regularly purifying saltpeter, using wood ashes containing potassium carbonate to precipitate calcium from their dung liquor, and using ox blood, alum, and slices of turnip to clarify the solution.
During the Renaissance, two European schools of pyrotechnic thought emerged, one in Italy and the other at Nuremberg, Germany. In Italy, Vannoccio Biringuccio, born in 1480, was a member of the guild Fraternita di Santa Barbara but broke with the tradition of secrecy by setting down everything he knew in a book titled De la pirotechnia, written in the vernacular. It was published posthumously in 1540, with 9 editions over 138 years, and was also reprinted by MIT Press in 1966.
By the mid-17th century fireworks were used for entertainment on an unprecedented scale in Europe, being popular even at resorts and public gardens. With the publication of Deutliche Anweisung zur Feuerwerkerey (1748), methods for creating fireworks were sufficiently well-known and well-described that "Firework making has become an exact science." In 1774 Louis XVI ascended to the throne of France at age 20. After he discovered that France was not self-sufficient in gunpowder, a Gunpowder Administration was established; to head it, the lawyer Antoine Lavoisier was appointed. Although from a bourgeois family, after his degree in law Lavoisier became wealthy from a company set up to collect taxes for the Crown; this allowed him to pursue experimental natural science as a hobby.
Without access to cheap saltpeter (controlled by the British), for hundreds of years France had relied on saltpetremen with royal warrants, the droit de fouille or "right to dig", to seize nitrous-containing soil and demolish walls of barnyards, without compensation to the owners. This caused farmers, the wealthy, or entire villages to bribe the petermen and the associated bureaucracy to leave their buildings alone and the saltpeter uncollected. Lavoisier instituted a crash program to increase saltpeter production, revised (and later eliminated) the droit de fouille, researched best refining and powder manufacturing methods, instituted management and record-keeping, and established pricing that encouraged private investment in works. Although saltpeter from new Prussian-style putrefaction works had not been produced yet (the process taking about 18 months), in only a year France had gunpowder to export. A chief beneficiary of this surplus was the American Revolution. By careful testing and adjusting the proportions and grinding time, powder from mills such as at Essonne outside Paris became the best in the world by 1788, and inexpensive.
Two British physicists, Andrew Noble and Frederick Abel, worked to improve the properties of gunpowder during the late 19th century. This formed the basis for the Noble-Abel gas equation for internal ballistics.
The introduction of smokeless powder in the late 19th century led to a contraction of the gunpowder industry. After the end of World War I, the majority of the British gunpowder manufacturers merged into a single company, "Explosives Trades limited", and a number of sites were closed down, including those in Ireland. This company became Nobel Industries Limited, and in 1926 became a founding member of Imperial Chemical Industries. The Home Office removed gunpowder from its list of Permitted Explosives. Shortly afterwards, on 31 December 1931, the former Curtis & Harvey's Glynneath gunpowder factory at Pontneddfechan in Wales closed down. The factory was demolished by fire in 1932. The last remaining gunpowder mill at the Royal Gunpowder Factory, Waltham Abbey was damaged by a German parachute mine in 1941 and it never reopened. This was followed by the closure and demolition of the gunpowder section at the Royal Ordnance Factory, ROF Chorley, at the end of World War II, and of ICI Nobel's Roslin gunpowder factory which closed in 1954. This left ICI Nobel's Ardeer site in Scotland, which included a gunpowder factory, as the only factory in Great Britain producing gunpowder. The gunpowder area of the Ardeer site closed in October 1976.
India
Gunpowder and gunpowder weapons were transmitted to India through the Mongol invasions of India. The Mongols were defeated by Alauddin Khalji of the Delhi Sultanate, and some of the Mongol soldiers remained in northern India after their conversion to Islam. It was written in the Tarikh-i Firishta (1606–1607) that Nasiruddin Mahmud, the ruler of the Delhi Sultanate, presented the envoy of the Mongol ruler Hulegu Khan with a dazzling pyrotechnics display upon his arrival in Delhi in 1258. Nasiruddin Mahmud tried to express his strength as a ruler and tried to ward off any Mongol attempt similar to the Siege of Baghdad (1258). Firearms known as top-o-tufak also existed in many Muslim kingdoms in India by as early as 1366. From then on the employment of gunpowder warfare in India was prevalent, with events such as the Siege of Belgaum in 1473 by Sultan Muhammad Shah Bahmani.
The shipwrecked Ottoman Admiral Seydi Ali Reis is known to have introduced the earliest type of matchlock weapons, which the Ottomans used against the Portuguese during the Siege of Diu (1531). After that, a diverse variety of firearms, large guns in particular, became visible in Tanjore, Dacca, Bijapur, and Murshidabad. Guns made of bronze were recovered from Calicut (1504), the former capital of the Zamorins.
The Mughal emperor Akbar mass-produced matchlocks for the Mughal Army. Akbar is personally known to have shot a leading Rajput commander during the Siege of Chittorgarh. The Mughals began to use bamboo rockets (mainly for signalling) and employ sappers: special units that undermined heavy stone fortifications to plant gunpowder charges.
The Mughal Emperor Shah Jahan is known to have introduced much more advanced matchlocks, whose designs were a combination of Ottoman and Mughal designs. Shah Jahan also countered the British and other Europeans in his province of Gujarāt, which supplied Europe with saltpeter for use in gunpowder warfare during the 17th century. Bengal and Mālwa also participated in saltpeter production. The Dutch, French, Portuguese, and English used Chhapra as a center of saltpeter refining.
Ever since the founding of the Sultanate of Mysore by Hyder Ali, French military officers were employed to train the Mysore Army. Hyder Ali and his son Tipu Sultan were the first to introduce modern cannons and muskets, their army was also the first in India to have official uniforms. During the Second Anglo-Mysore War Hyder Ali and his son Tipu Sultan unleashed the Mysorean rockets at their British opponents effectively defeating them on various occasions. The Mysorean rockets inspired the development of the Congreve rocket, which the British widely used during the Napoleonic Wars and the War of 1812.
Southeast Asia
Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used cannons (Chinese: 炮—Pào) against Daha forces. Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty.
Even though the knowledge of making gunpowder-based weapons was known after the failed Mongol invasion of Java, and the predecessor of firearms, the pole gun (bedil tombak), is recorded as being used by Java in 1413, the knowledge of making "true" firearms came much later, after the middle of the 15th century. It was brought by the Islamic nations of West Asia, most probably the Arabs. The precise year of introduction is unknown, but it may be safely concluded to be no earlier than 1460. Before the arrival of the Portuguese in Southeast Asia, the natives already possessed primitive firearms, the Java arquebus. Portuguese influence to local weaponry after the capture of Malacca (1511) resulted in a new type of hybrid tradition matchlock firearm, the istinggar.
When the Portuguese came to the archipelago, they referred to the breech-loading swivel gun as the berço, while the Spaniards called it the verso. By the early 16th century, the Javanese were already locally producing large guns, some of which survive to the present day and are dubbed "sacred cannons" or "holy cannons". These cannons varied between 180- and 260-pounders, weighing between 3 and 8 tons, with lengths of between 3 and 6 m.
Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to colonel McKenzie quoted in Sir Thomas Stamford Raffles', The History of Java (1817), the purest sulfur was supplied from a crater from a mountain near the straits of Bali.
Historiography
On the origins of gunpowder technology, historian Tonio Andrade remarked, "Scholars today overwhelmingly concur that the gun was invented in China." Gunpowder and the gun are widely believed by historians to have originated from China due to the large body of evidence that documents the evolution of gunpowder from a medicine to an incendiary and explosive, and the evolution of the gun from the fire lance to a metal gun, whereas similar records do not exist elsewhere. As Andrade explains, the large amount of variation in gunpowder recipes in China relative to Europe is "evidence of experimentation in China, where gunpowder was at first used as an incendiary and only later became an explosive and a propellant... in contrast, formulas in Europe diverged only very slightly from the ideal proportions for use as an explosive and a propellant, suggesting that gunpowder was introduced as a mature technology."
However, the history of gunpowder is not without controversy. A major problem confronting the study of early gunpowder history is ready access to sources close to the events described. Often the first records potentially describing use of gunpowder in warfare were written several centuries after the fact, and may well have been colored by the contemporary experiences of the chronicler. Translation difficulties have led to errors or loose interpretations bordering on artistic licence. Ambiguous language can make it difficult to distinguish gunpowder weapons from similar technologies that do not rely on gunpowder. A commonly cited example is a report of the Battle of Mohi in Eastern Europe that mentions a "long lance" sending forth "evil-smelling vapors and smoke", which has been variously interpreted by different historians as the "first gas attack upon European soil" using gunpowder, "the first use of cannon in Europe", or merely a "toxic gas" with no evidence of gunpowder. It is difficult to accurately translate original Chinese alchemical texts, which tend to explain phenomena through metaphor, into modern scientific language with rigidly defined terminology in English. Early texts potentially mentioning gunpowder are sometimes marked by a linguistic process where semantic change occurred. For instance, the Arabic word naft transitioned from denoting naphtha to denoting gunpowder, and the Chinese word pào changed in meaning from trebuchet to cannon. This has led to arguments on the exact origins of gunpowder based on etymological foundations. Science and technology historian Bert S. Hall makes the observation that, "It goes without saying, however, that historians bent on special pleading, or simply with axes of their own to grind, can find rich material in these terminological thickets."
Another major area of contention in modern studies of the history of gunpowder is regarding the transmission of gunpowder. While the literary and archaeological evidence supports a Chinese origin for gunpowder and guns, the manner in which gunpowder technology was transferred from China to the West is still under debate. It is unknown why the rapid spread of gunpowder technology across Eurasia took place over several decades whereas other technologies such as paper, the compass, and printing did not reach Europe until centuries after they were invented in China.
Components
Gunpowder is a granular mixture of:
a nitrate, typically potassium nitrate (KNO3), which supplies oxygen for the reaction;
charcoal, which provides carbon and other fuel for the reaction, simplified as carbon (C);
sulfur (S), which, while also serving as a fuel, lowers the temperature required to ignite the mixture, thereby increasing the rate of combustion.
Potassium nitrate is the most important ingredient in terms of both bulk and function because the combustion process releases oxygen from the potassium nitrate, promoting the rapid burning of the other ingredients. To reduce the likelihood of accidental ignition by static electricity, the granules of modern gunpowder are typically coated with graphite, which prevents the build-up of electrostatic charge.
Charcoal does not consist of pure carbon; rather, it consists of partially pyrolyzed cellulose, in which the wood is not completely decomposed. Carbon differs from ordinary charcoal. Whereas charcoal's autoignition temperature is relatively low, carbon's is much greater. Thus, a gunpowder composition containing pure carbon would burn similarly to a match head, at best.
The current standard composition for the gunpowder manufactured by pyrotechnicians was adopted as long ago as 1780. Proportions by weight are 75% potassium nitrate (known as saltpeter or saltpetre), 15% softwood charcoal, and 10% sulfur. These ratios have varied over the centuries and by country, and can be altered somewhat depending on the purpose of the powder. For instance, power grades of black powder, unsuitable for use in firearms but adequate for blasting rock in quarrying operations, are called blasting powder rather than gunpowder, with standard proportions of 70% nitrate, 14% charcoal, and 16% sulfur; blasting powder may be made with the cheaper sodium nitrate substituted for potassium nitrate, and proportions may be as low as 40% nitrate, 30% charcoal, and 30% sulfur. In 1857, Lammot du Pont solved the main problem of using cheaper sodium nitrate formulations when he patented DuPont "B" blasting powder. After manufacturing grains from press-cake in the usual way, his process tumbled the powder with graphite dust for 12 hours. This formed a graphite coating on each grain that reduced its ability to absorb moisture.
Neither the use of graphite nor sodium nitrate was new. Glossing gunpowder corns with graphite was already an accepted technique in 1839, and sodium nitrate-based blasting powder had been made in Peru for many years using the sodium nitrate mined at Tarapacá (now in Chile). Also, in 1846, two plants were built in south-west England to make blasting powder using this sodium nitrate. The idea may well have been brought from Peru by Cornish miners returning home after completing their contracts. Another suggestion is that it was William Lobb, the plant collector, who recognised the possibilities of sodium nitrate during his travels in South America. Lammot du Pont would have known about the use of graphite and probably also knew about the plants in south-west England. In his patent he was careful to state that his claim was for the combination of graphite with sodium nitrate-based powder, rather than for either of the two individual technologies.
French war powder in 1879 used the ratio 75% saltpeter, 12.5% charcoal, 12.5% sulfur. English war powder in 1879 used the ratio 75% saltpeter, 15% charcoal, 10% sulfur. The British Congreve rockets used 62.4% saltpeter, 23.2% charcoal and 14.4% sulfur, but the British Mark VII gunpowder was changed to 65% saltpeter, 20% charcoal and 15% sulfur. The explanation for the wide variety in formulation relates to usage. Powder used for rocketry can use a slower burn rate since it accelerates the projectile for a much longer time—whereas powders for weapons such as flintlocks, cap-locks, or matchlocks need a higher burn rate to accelerate the projectile in a much shorter distance. Cannons usually used lower burn-rate powders, because most would burst with higher burn-rate powders.
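As a simple worked example of how these weight ratios translate into batch quantities, the sketch below scales a few of the formulations quoted above to a 1 kg batch. The percentages come from the text; the helper function itself is only illustrative:

```python
# Percent-by-weight formulations quoted in the text: (saltpeter, charcoal, sulfur)
FORMULATIONS = {
    "standard (1780)":        (75.0, 15.0, 10.0),
    "blasting powder":        (70.0, 14.0, 16.0),
    "French war powder 1879": (75.0, 12.5, 12.5),
    "Congreve rockets":       (62.4, 23.2, 14.4),
}

def batch_masses(formulation, total_g=1000.0):
    """Convert percent-by-weight proportions into component masses in grams."""
    return tuple(round(total_g * pct / 100.0, 1) for pct in formulation)

for name, parts in FORMULATIONS.items():
    saltpeter, charcoal, sulfur = batch_masses(parts)
    print(f"{name}: {saltpeter} g saltpeter, {charcoal} g charcoal, {sulfur} g sulfur")
```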
Other compositions
Besides black powder, there are other historically important types of gunpowder. "Brown gunpowder" is cited as composed of 79% nitre, 3% sulfur, and 18% charcoal per 100 parts of dry powder, with about 2% moisture. Prismatic Brown Powder is a large-grained product the Rottweil Company introduced in 1884 in Germany, which was adopted by the British Royal Navy shortly thereafter. The French navy adopted a fine, 3.1-millimetre, non-prismatic grained product called Slow Burning Cocoa (SBC) or "cocoa powder". These brown powders reduced the burning rate even further by using as little as 2 percent sulfur and using charcoal made from rye straw that had not been completely charred, hence the brown color.
Lesmok powder was a product developed by DuPont in 1911, one of several semi-smokeless products in the industry containing a mixture of black and nitrocellulose powder. It was sold to Winchester and others primarily for .22 and .32 small calibers. Its advantage was that it was believed at the time to be less corrosive than smokeless powders then in use. It was not understood in the U.S. until the 1920s that the actual source of corrosion was the potassium chloride residue from potassium chlorate sensitized primers. The bulkier black powder fouling better disperses primer residue. Failure to mitigate primer corrosion by dispersion caused the false impression that nitrocellulose-based powder caused corrosion. Lesmok had some of the bulk of black powder for dispersing primer residue, but somewhat less total bulk than straight black powder, thus requiring less frequent bore cleaning. It was last sold by Winchester in 1947.
Sulfur-free powders
The development of smokeless powders, such as cordite, in the late 19th century created the need for a spark-sensitive priming charge, such as gunpowder. However, the sulfur content of traditional gunpowders caused corrosion problems with Cordite Mk I, and this led to the introduction of a range of sulfur-free gunpowders of varying grain sizes. They typically contain 70.5 parts of saltpeter and 29.5 parts of charcoal. Like black powder, they were produced in different grain sizes. In the United Kingdom, the finest grain was known as sulfur-free mealed powder (SMP). Coarser grains were numbered as sulfur-free gunpowder (SFG n): 'SFG 12', 'SFG 20', 'SFG 40' and 'SFG 90', for example, where the number represents the smallest BSS sieve mesh size that retained no grains.
Sulfur's main role in gunpowder is to decrease the ignition temperature. A sample reaction for sulfur-free gunpowder would be:
6 KNO3 + C7H4O -> 3 K2CO3 + 4 CO2 + 2 H2O + 3 N2
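The equation as written is stoichiometrically balanced, which can be checked by counting atoms on each side. Here is a minimal sketch of such a check; the per-species element counts are read directly off the formulas above, with C7H4O treated as the simplified empirical formula for charcoal:

```python
from collections import Counter

# Element counts per formula unit, as written in the simplified equation above
SPECIES = {
    "KNO3":  {"K": 1, "N": 1, "O": 3},
    "C7H4O": {"C": 7, "H": 4, "O": 1},   # simplified empirical formula for charcoal
    "K2CO3": {"K": 2, "C": 1, "O": 3},
    "CO2":   {"C": 1, "O": 2},
    "H2O":   {"H": 2, "O": 1},
    "N2":    {"N": 2},
}

def element_totals(side):
    """Sum element counts over (coefficient, species) pairs."""
    totals = Counter()
    for coeff, species in side:
        for element, count in SPECIES[species].items():
            totals[element] += coeff * count
    return totals

reactants = [(6, "KNO3"), (1, "C7H4O")]
products  = [(3, "K2CO3"), (4, "CO2"), (2, "H2O"), (3, "N2")]

assert element_totals(reactants) == element_totals(products)
print("balanced:", dict(element_totals(reactants)))
# balanced: {'K': 6, 'N': 6, 'O': 19, 'C': 7, 'H': 4}
```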
Smokeless powders
The term black powder was coined in the late 19th century, primarily in the United States, to distinguish prior gunpowder formulations from the new smokeless powders and semi-smokeless powders. Semi-smokeless powders featured bulk volume properties that approximated black powder, but had significantly reduced amounts of smoke and combustion products. Smokeless powder has different burning properties (pressure vs. time) and can generate higher pressures and work per gram. This can rupture older weapons designed for black powder. Smokeless powders ranged in color from brownish tan to yellow to white. Most of the bulk semi-smokeless powders ceased to be manufactured in the 1920s.
Granularity
Serpentine
The original dry-compounded powder used in 15th-century Europe was known as "Serpentine", either a reference to Satan or to a common artillery piece that used it. The ingredients were ground together with a mortar and pestle, perhaps for 24 hours, resulting in a fine flour. Vibration during transportation could cause the components to separate again, requiring remixing in the field. Also, if the quality of the saltpeter was low (for instance if it was contaminated with highly hygroscopic calcium nitrate), or if the powder was simply old (due to the mildly hygroscopic nature of potassium nitrate), in humid weather it would need to be re-dried. The dust from "repairing" powder in the field was a major hazard.
Loading cannons or bombards before the powder-making advances of the Renaissance was a skilled art. Fine powder loaded haphazardly or too tightly would burn incompletely or too slowly. Typically, the breech-loading powder chamber in the rear of the piece was filled only about half full, the serpentine powder neither too compressed nor too loose, a wooden bung pounded in to seal the chamber from the barrel when assembled, and the projectile placed on. A carefully determined empty space was necessary for the charge to burn effectively. When the cannon was fired through the touchhole, turbulence from the initial surface combustion caused the rest of the powder to be rapidly exposed to the flame.
The advent of much more powerful and easy to use corned powder changed this procedure, but serpentine was used with older guns into the 17th century.
Corning
For propellants to oxidize and burn rapidly and effectively, the combustible ingredients must be reduced to the smallest possible particle sizes, and be as thoroughly mixed as possible. Once mixed, however, for better results in a gun, makers discovered that the final product should be in the form of individual dense grains that spread the fire quickly from grain to grain, much as straw or twigs catch fire more quickly than a pile of sawdust.
In late 14th-century Europe and China, gunpowder was improved by wet grinding; liquids such as distilled spirits were added during the grinding-together of the ingredients and the moist paste dried afterwards. The principle of wet mixing to prevent the separation of dry ingredients, invented for gunpowder, is used today in the pharmaceutical industry. It was discovered that if the paste was rolled into balls before drying, the resulting gunpowder absorbed less water from the air during storage and traveled better. The balls were then crushed in a mortar by the gunner immediately before use, with the old problem of uneven particle size and packing causing unpredictable results. If the right size particles were chosen, however, the result was a great improvement in power. Forming the damp paste into corn-sized clumps by hand or with the use of a sieve instead of larger balls produced a product after drying that loaded much better, as each tiny piece provided its own surrounding air space that allowed much more rapid combustion than a fine powder. This "corned" gunpowder was from 30% to 300% more powerful; in one cited example, a ball that required a heavy charge of serpentine could be shot with a much smaller charge of corned powder.
Because the dry powdered ingredients must be mixed and bonded together for extrusion and cut into grains to maintain the blend, size reduction and mixing is done while the ingredients are damp, usually with water. After 1800, instead of forming grains by hand or with sieves, the damp mill-cake was pressed in molds to increase its density and extract the liquid, forming press-cake. The pressing took varying amounts of time, depending on conditions such as atmospheric humidity. The hard, dense product was broken again into tiny pieces, which were separated with sieves to produce a uniform product for each purpose: coarse powders for cannons, finer grained powders for muskets, and the finest for small hand guns and priming. Inappropriately fine-grained powder often caused cannons to burst before the projectile could move down the barrel, due to the high initial spike in pressure. Mammoth powder with large grains, made for Rodman's 15-inch cannon, reduced the peak pressure to about 20 percent of what ordinary cannon powder would have produced.
In the mid-19th century, measurements were made determining that the burning rate within a grain of black powder (or a tightly packed mass) is about 6 cm/s (0.20 feet/s), while the rate of ignition propagation from grain to grain is around 9 m/s (30 feet/s), over two orders of magnitude faster.
Modern types
Modern corning first compresses the fine black powder meal into blocks with a fixed density (1.7 g/cm3). In the United States, gunpowder grains were designated F (for fine) or C (for coarse). Grain diameter decreased with a larger number of Fs and increased with a larger number of Cs. Even larger grains were produced for large-bore artillery. The standard DuPont Mammoth powder developed by Thomas Rodman and Lammot du Pont for use during the American Civil War had large grains with edges rounded in a glazing barrel. Other versions had grains the size of golf and tennis balls for use in Rodman guns. In 1875 DuPont introduced Hexagonal powder for large artillery, which was pressed using shaped plates with a small center core, like a wagon wheel nut; the center hole widened as the grain burned. By 1882 German makers also produced hexagonal grained powders of a similar size for artillery.
By the late 19th century manufacturing focused on standard grades of black powder from Fg used in large bore rifles and shotguns, through FFg (medium and small-bore arms such as muskets and fusils), FFFg (small-bore rifles and pistols), and FFFFg (extreme small bore, short pistols and most commonly for priming flintlocks). A coarser grade for use in military artillery blanks was designated A-1. These grades were sorted on a system of screens with oversize retained on a mesh of 6 wires per inch, A-1 retained on 10 wires per inch, Fg retained on 14, FFg on 24, FFFg on 46, and FFFFg on 60. Fines designated FFFFFg were usually reprocessed to minimize explosive dust hazards. In the United Kingdom, the main service gunpowders were classified RFG (rifle grained fine), with grain diameters of one to two millimeters, and RLG (rifle grained large), for grain diameters between two and six millimeters. Gunpowder grains can alternatively be categorized by mesh size: the smallest BSS sieve mesh size that retains no grains. Recognized grain sizes are Gunpowder G 7, G 20, G 40, and G 90.
Owing to the large market of antique and replica black-powder firearms in the US, modern black powder substitutes like Pyrodex, Triple Seven and Black Mag3 pellets have been developed since the 1970s. These products, which should not be confused with smokeless powders, aim to produce less fouling (solid residue) while maintaining the traditional volumetric measurement system for charges. Claims that these products are less corrosive have, however, been controversial. New cleaning products for black-powder guns have also been developed for this market.
Chemistry
A simple, commonly cited, chemical equation for the combustion of gunpowder is:
2 KNO3 + S + 3 C → K2S + N2 + 3 CO2.
A balanced, but still simplified, equation is:
10 KNO3 + 3 S + 8 C → 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2.
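These stoichiometries can be checked mechanically. The following sketch is illustrative only: its tiny formula parser handles just the simple, parenthesis-free formulas quoted here, and it verifies the two equations above together with the sulfur-free reaction given earlier.

```python
# Sketch: verify that the cited gunpowder equations are atom-balanced.
# Minimal parser for simple formulas like "K2SO4" (no parentheses).
import re
from collections import Counter

def count_atoms(formula: str, coeff: int = 1) -> Counter:
    atoms = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coeff * (int(n) if n else 1)
    return atoms

def side(terms) -> Counter:
    total = Counter()
    for coeff, formula in terms:
        total += count_atoms(formula, coeff)
    return total

equations = [
    # 2 KNO3 + S + 3 C -> K2S + N2 + 3 CO2
    ([(2, "KNO3"), (1, "S"), (3, "C")],
     [(1, "K2S"), (1, "N2"), (3, "CO2")]),
    # 10 KNO3 + 3 S + 8 C -> 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2
    ([(10, "KNO3"), (3, "S"), (8, "C")],
     [(2, "K2CO3"), (3, "K2SO4"), (6, "CO2"), (5, "N2")]),
    # 6 KNO3 + C7H4O -> 3 K2CO3 + 4 CO2 + 2 H2O + 3 N2 (sulfur-free)
    ([(6, "KNO3"), (1, "C7H4O")],
     [(3, "K2CO3"), (4, "CO2"), (2, "H2O"), (3, "N2")]),
]

for lhs, rhs in equations:
    assert side(lhs) == side(rhs), (side(lhs), side(rhs))
print("all three equations balance")
```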
The exact percentages of ingredients varied greatly through the medieval period as the recipes were developed by trial and error, and needed to be updated for changing military technology.
Gunpowder does not burn as a single reaction, so the byproducts are not easily predicted. One study showed that it produced 55.91% solid products (in descending order of quantity: potassium carbonate, potassium sulfate, potassium sulfide, sulfur, potassium nitrate, potassium thiocyanate, carbon, ammonium carbonate), 42.98% gaseous products (carbon dioxide, nitrogen, carbon monoxide, hydrogen sulfide, hydrogen, methane), and 1.11% water.
Gunpowder made with less-expensive and more plentiful sodium nitrate instead of potassium nitrate (in appropriate proportions) works just as well. Gunpowder releases 3 megajoules per kilogram and contains its own oxidant. This is less than TNT (4.7 megajoules per kilogram), or gasoline (47.2 megajoules per kilogram in combustion, but gasoline requires an oxidant; for instance, an optimized gasoline and O2 mixture releases 10.4 megajoules per kilogram, taking into account the mass of the oxygen).
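The mixture figure can be roughly reproduced from the fuel-only value. A back-of-envelope sketch, under the simplifying assumption that gasoline is pure octane (real gasoline is a blend, so this only approximates the cited 10.4 MJ/kg):

```python
# Energy per kilogram of a stoichiometric gasoline/oxygen mixture,
# approximating gasoline as octane: 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O.
m_fuel = 2 * 114.23      # g of octane per reaction unit (C8H18: 114.23 g/mol)
m_oxygen = 25 * 32.00    # g of O2 per reaction unit
e_fuel = 47.2            # MJ/kg, fuel-only figure from the text

energy = e_fuel * m_fuel / 1000.0                   # MJ per reaction unit
per_kg_mix = energy * 1000.0 / (m_fuel + m_oxygen)  # MJ per kg of fuel+O2
print(f"{per_kg_mix:.1f} MJ/kg of fuel plus oxidant")  # ~10.5 MJ/kg
```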
Gunpowder also has a low energy density compared to modern "smokeless" powders, and thus to achieve high energy loadings, large amounts are needed with heavy projectiles.
Production
For the most powerful black powder, meal powder, a wood charcoal is used. The best wood for the purpose is Pacific willow, but others such as alder or buckthorn can be used. In Great Britain between the 15th and 19th centuries charcoal from alder buckthorn was greatly prized for gunpowder manufacture; cottonwood was used by the American Confederate States. The ingredients are reduced in particle size and mixed as intimately as possible. Originally, this was done with a mortar and pestle or a similarly operating stamping mill, using copper, bronze or other non-sparking materials, until supplanted by the rotating ball mill principle with non-sparking bronze or lead media. Historically, a marble or limestone edge runner mill, running on a limestone bed, was used in Great Britain; however, by the mid 19th century this had changed to either an iron-shod stone wheel or a cast iron wheel running on an iron bed. The mix was dampened with alcohol or water during grinding to prevent accidental ignition. This also helps the extremely soluble saltpeter to mix into the microscopic pores of the very high surface-area charcoal.
Around the late 14th century, European powdermakers first began adding liquid during grinding to improve mixing, reduce dust, and with it the risk of explosion. The powder-makers would then shape the resulting paste of dampened gunpowder, known as mill cake, into corns, or grains, to dry. Not only did corned powder keep better because of its reduced surface area, gunners also found that it was more powerful and easier to load into guns. Before long, powder-makers standardized the process by forcing mill cake through sieves instead of corning powder by hand.
The improvement was based on reducing the surface area of a higher density composition. At the beginning of the 19th century, makers increased density further by static pressing. They shoveled damp mill cake into a two-foot square box, placed this beneath a screw press and reduced it to half its volume. "Press cake" had the hardness of slate. They broke the dried slabs with hammers or rollers, and sorted the granules with sieves into different grades. In the United States, Eleuthere Irenee du Pont, who had learned the trade from Lavoisier, tumbled the dried grains in rotating barrels to round the edges and increase durability during shipping and handling. (Sharp grains rounded off in transport, producing fine "meal dust" that changed the burning properties.)
Another advance was the manufacture of kiln charcoal by distilling wood in heated iron retorts instead of burning it in earthen pits. Controlling the temperature influenced the power and consistency of the finished gunpowder. In 1863, in response to high prices for Indian saltpeter, DuPont chemists developed a process using potash or mined potassium chloride to convert plentiful Chilean sodium nitrate to potassium nitrate.
The following year (1864) the Gatebeck Low Gunpowder Works in Cumbria (Great Britain) started a plant to manufacture potassium nitrate by essentially the same chemical process. This is nowadays called the 'Wakefield Process', after the owners of the company. It would have used potassium chloride from the Staßfurt mines, near Magdeburg, Germany, which had recently become available in industrial quantities.
During the 18th century, gunpowder factories became increasingly dependent on mechanical energy. Despite mechanization, production difficulties related to humidity control, especially during the pressing, were still present in the late 19th century. A paper from 1885 laments that "Gunpowder is such a nervous and sensitive spirit, that in almost every process of manufacture it changes under our hands as the weather changes." Pressing times to the desired density could vary by a factor of three depending on the atmospheric humidity.
Legal status
The United Nations Model Regulations on the Transportation of Dangerous Goods and national transportation authorities, such as United States Department of Transportation, have classified gunpowder (black powder) as a Group A: Primary explosive substance for shipment because it ignites so easily. Complete manufactured devices containing black powder are usually classified as Group D: Secondary detonating substance, or black powder, or article containing secondary detonating substance, such as firework, class D model rocket engine, etc., for shipment because they are harder to ignite than loose powder. As explosives, they all fall into the category of Class 1.
Other uses
Besides its use as a propellant in firearms and artillery, black powder's other main use has been as a blasting powder in quarrying, mining, and road construction (including railroad construction). During the 19th century, outside of war emergencies such as the Crimean War or the American Civil War, more black powder was used in these industrial uses than in firearms and artillery. Dynamite gradually replaced it for those uses. Today, industrial explosives for such uses are still a huge market, but most of the market is in newer explosives rather than black powder.
Beginning in the 1930s, gunpowder or smokeless powder was used in rivet guns, stun guns for animals, cable splicers and other industrial construction tools. The "stud gun", a powder-actuated tool, drove nails or screws into solid concrete, a function not possible with hydraulic tools, and today is still an important part of various industries, but the cartridges usually use smokeless powders. Industrial shotguns have been used to eliminate persistent material rings in operating rotary kilns (such as those for cement, lime, phosphate, etc.) and clinker in operating furnaces, and commercial tools make the method more reliable.
Gunpowder has occasionally been employed for other purposes besides weapons, mining, fireworks and construction:
After the Battle of Aspern-Essling (1809), Dominique-Jean Larrey, the surgeon of the Napoleonic Army, lacking salt, seasoned a horse meat bouillon for the wounded under his care with gunpowder. It was also used for sterilization in ships when there was no alcohol.
British sailors used gunpowder to create tattoos when ink wasn't available, by pricking the skin and rubbing the powder into the wound in a method known as traumatic tattooing.
Christiaan Huygens experimented with gunpowder in 1673 in an early attempt to build a gunpowder engine, but he did not succeed. Modern attempts to recreate his invention were similarly unsuccessful.
Near London in 1853, Captain Shrapnel demonstrated a mineral processing use of black powder in a method for crushing gold-bearing ores by firing them from a cannon into an iron chamber, and "much satisfaction was expressed by all present". He hoped it would be useful on the goldfields of California and Australia. Nothing came of the invention, as continuously operating crushing machines that achieved more reliable comminution were already coming into use.
Starting in 1967, Los Angeles-based artist Ed Ruscha began using gunpowder as an artistic medium for a series of works on paper.
Gunpowder had originally been produced for medicinal purposes. It was eaten, in hopes of curing digestive ailments; inhaled, for respiratory disorders; and, as mentioned, rubbed onto skin disorders such as rashes or burns.
| Technology | Energy | null |
12772 | https://en.wikipedia.org/wiki/Gas%20mask | Gas mask | A gas mask is a piece of personal protective equipment used to protect the wearer from inhaling airborne pollutants and toxic gases. The mask forms a sealed cover over the nose and mouth, but may also cover the eyes and other vulnerable soft tissues of the face. Most gas masks are also respirators, though the word gas mask is often used to refer to military equipment (such as a field protective mask), the scope used in this article. Gas masks protect the user only against inhaling or ingesting chemical agents and against agents contacting the eyes (many chemical agents act through eye contact). Most combined gas mask filters will last around 8 hours in a biological or chemical situation. Filters against specific chemical agents can last up to 20 hours.
Airborne toxic materials may be gaseous (for example, chlorine or mustard gas), or particulates (such as biological agents). Many filters provide protection from both types.
The first gas masks mostly used circular lenses made of glass, mica or cellulose acetate to allow vision. Glass and mica were quite brittle and needed frequent replacement. The later Triplex lens style (a cellulose acetate lens sandwiched between glass ones) became more popular, and alongside plain cellulose acetate they became the standard into the 1930s. Panoramic lenses were not popular until the 1930s, but there are some examples of those being used even during the war (Austro-Hungarian 15M). Later, stronger polycarbonate came into use.
Some masks have one or two compact air filter containers screwed onto inlets, while others have a large air filtration container connected to the gas mask via a hose that is sometimes confused with an air-supplied respirator in which an alternate supply of fresh air (oxygen tanks) is delivered.
History and development
Early breathing devices
According to Popular Mechanics, "The common sponge was used in ancient Greece as a gas mask..." In 1785, Jean-François Pilâtre de Rozier invented a respirator.
Primitive respirator examples were used by miners and introduced by Alexander von Humboldt in 1799, when he worked as a mining engineer in Prussia. The forerunner to the modern gas mask was invented in 1847 by Lewis P. Haslett, a device that contained elements that allowed breathing through a nose and mouthpiece, inhalation of air through a bulb-shaped filter, and a vent to exhale air back into the atmosphere. First Facts states that a "gas mask resembling the modern type" was patented by Lewis Phectic Haslett of Louisville, Kentucky, who received a patent on June 12, 1849. U.S. patent #6,529, issued to Haslett, described the first "Inhaler or Lung Protector" that filtered dust from the air.
Early versions were constructed by the Scottish chemist John Stenhouse in 1854 and the physicist John Tyndall in the 1870s. Another early design was the "Safety Hood and Smoke Protector" invented by Garrett Morgan in 1912, and patented in 1914. It was a simple device consisting of a cotton hood with two hoses which hung down to the floor, allowing the wearer to breathe the safer air found there. In addition, moist sponges were inserted at the end of the hoses in order to better filter the air.
World War I
The First World War brought about the first need for mass-produced gas masks on both sides because of extensive use of chemical weapons. The German army successfully used poison gas for the first time against Allied troops at the Second Battle of Ypres, Belgium on April 22, 1915. An immediate response was cotton wool wrapped in muslin, issued to the troops by May 1. This was followed by the Black Veil Respirator, invented by John Scott Haldane, which was a cotton pad soaked in an absorbent solution which was secured over the mouth using black cotton veiling.
Seeking to improve on the Black Veil respirator, Cluny Macpherson created a mask made of chemical-absorbing fabric which fitted over the entire head: a canvas hood treated with chlorine-absorbing chemicals, and fitted with a transparent mica eyepiece. Macpherson presented his idea to the British War Office Anti-Gas Department on May 10, 1915; prototypes were developed soon after. The design was adopted by the British Army and introduced as the British Smoke Hood in June 1915; Macpherson was appointed to the War Office Committee for Protection against Poisonous Gases. More elaborate sorbent compounds were added later to further iterations of his helmet (PH helmet), to defeat other respiratory poison gases used such as phosgene, diphosgene and chloropicrin. In summer and autumn 1915, Edward Harrison, Bertram Lambert and John Sadd developed the Large Box Respirator. This canister gas mask had a tin can containing the absorbent materials, connected by a hose, and began to be issued in February 1916. A compact version, the Small Box Respirator, was made a universal issue from August 1916.
In the first gas masks of World War I, it was initially found that wood charcoal was a good absorbent of poison gases. Around 1918, it was found that charcoals made from the shells and seeds of various fruits and nuts such as coconuts, chestnuts, horse-chestnuts, and peach stones performed much better than wood charcoal. These waste materials were collected from the public in recycling programs to assist the war effort.
The first effective filtering activated charcoal gas mask in the world was invented in 1915 by Russian chemist Nikolay Zelinsky.
Also in World War I, since dogs were frequently used on the front lines, a special type of gas mask was developed that dogs were trained to wear. Other gas masks were developed during World War I and the time following for horses in the various mounted units that operated near the front lines. In America, thousands of gas masks were produced for American as well as Allied troops. Mine Safety Appliances was a chief producer. This mask was later used widely in industry.
World War II
The British Respirator, Anti-Gas (Light) was developed in 1943 by the British. It was made of plastic and rubber-like material that greatly reduced the weight and bulk compared to World War I gas masks, and fitted the user's face more snugly and comfortably. The main improvement was replacing the separate filter canister connected with a hose by an easily replaceable filter canister screwed on the side of the gas mask. Also, it had replaceable plastic lenses.
Modern mask
Gas mask development since has mirrored the development of chemical agents in warfare, filling the need to protect against ever more deadly threats, biological weapons, and radioactive dust in the nuclear era. However, for agents that cause harm through contact or penetration of the skin, such as blister agent or nerve agent, a gas mask alone is not sufficient protection, and full protective clothing must be worn in addition to protect from contact with the atmosphere. For reasons of civil defence and personal protection, individuals often buy gas masks since they believe that they protect against the harmful effects of an attack with nuclear, biological, or chemical (NBC) agents, which is only partially true, as gas masks protect only against respiratory absorption. Most military gas masks are designed to be capable of protecting against all NBC agents, but they can have filter canisters proof against those agents (heavier) or only against riot control agents and smoke (lighter and often used for training purposes). There are lightweight masks solely for protection against riot-control agents and not for NBC situations.
Although thorough training and the availability of gas masks and other protective equipment can nullify the casualty-causing effects of an attack by chemical agents, troops who are forced to operate in full protective gear are less efficient in completing tasks, tire easily, and may be affected psychologically by the threat of attack by those weapons. During the Cold War, it was seen as inevitable that there would be a constant NBC threat on the battlefield and so troops needed protection in which they could remain fully functional; thus, protective gear and especially gas masks have evolved to incorporate innovations in terms of increasing user comfort and compatibility with other equipment (from drinking devices to artificial respiration tubes, to communications systems etc.).
During the Iran–Iraq War (1980–88), Iraq developed its chemical weapons program with the help of European countries such as Germany and France and used them in a large scale against Iranians and Iraqi Kurds. Iran was unprepared for chemical warfare. In 1984, Iran received gas masks from the Republic of Korea and East Germany, but the Korean masks were not suited for the faces of non-East Asian people, the filter lasted for only 15 minutes, and the 5,000 masks bought from East Germany proved to be not gas masks but spray-painting goggles. As late as 1986, Iranian diplomats still travelled in Europe to buy active charcoal and models of filters to produce defensive gear domestically. In April 1988, Iran started domestic production of gas masks by the Iran Yasa factories.
Principles of construction
Absorption is the process of being drawn into a (usually larger) body or substrate, and adsorption is the process of deposition upon a surface. This can be used to remove both particulate and gaseous hazards. Although some form of reaction may take place, it is not necessary; the method may work by attractive charges. For example, if the target particles are positively charged, a negatively charged substrate may be used. Examples of substrates include activated carbon, and zeolites. This effect can be very simple and highly effective, for example using a damp cloth to cover the mouth and nose while escaping a fire. While this method can be effective at trapping particulates produced by combustion, it does not filter out harmful gases which may be toxic or which displace the oxygen required for survival.
Safety of old gas masks
Gas masks have a useful lifespan limited by the absorbent capacity of the filter. Filters cease to provide protection when saturated with hazardous chemicals, and degrade over time even if sealed. Most gas masks have sealing caps over the air intake and are stored in vacuum-sealed bags to prevent the filter from degrading due to exposure to humidity and pollutants in normal air. Unused gas mask filters from World War II may not protect the wearer at all, and could be harmful if worn due to long-term changes in the chemical composition of the filter.
Some World War II and Soviet Cold War gas mask filters contained chrysotile asbestos or crocidolite asbestos, not known to be harmful at the time. It is not reliably known for how long the materials were used in filters.
Typically, masks using 40 mm connections are a more recent design. Rubber degrades with time, so boxed unused "modern type" masks can be cracked and leak. The US C2 canister (black) contains hexavalent chromium; studies by the U.S. Army Chemical Corps found that the level in the filter was acceptable, but suggest caution when using, as it is a carcinogen.
Modern filter classification
The filter is selected according to the toxic compound. Each filter type protects against a particular hazard and is color-coded.
Particle filters are often included, because in many cases the hazardous materials are in the form of mist, which can be captured by the particle filter before entering the chemical adsorber. In Europe and jurisdictions with similar rules such as Russia and Australia, filter types are given suffix numbers to indicate their capacity. For non-particle hazards, the level "1" is assumed and a number "2" is used to indicate a better level. For particles (P), three levels are always given with the number. In the US, only the particle part is further classified by NIOSH air filtration ratings.
A filter type that can protect against multiple hazards is notated with the European symbols concatenated with each other. Examples include ABEK, ABEK-P3, and ABEK-HgP3. A2B2E2K2-P3 is the highest rating of filter available. An entirely different "multi/CBRN" filter class with an olive color is used in the US.
Filtration may be aided with an air pump to improve wearer comfort. Filtration of air is only possible if there is sufficient oxygen in the first place. Thus, when handling asphyxiants, or when ventilation is poor or the hazards are unknown, filtration is not possible and air must be supplied (with an SCBA system) from a pressurized bottle, as in scuba diving.
Use
A modern mask typically is constructed of an elastic polymer in various sizes. It is fitted with various adjustable straps which may be tightened to secure a good fit. Crucially, it is connected to a filter cartridge near the mouth either directly, or via a flexible hose. Some models contain drinking tubes which may be connected to a water bottle. Corrective lens inserts are also available for users who require them.
Masks are typically tested for fit before use. After a mask is fitted, it is often tested by various challenge agents. Isoamyl acetate, a synthetic banana flavourant, and camphor are often used as innocuous challenge agents. In the military, teargases such as CN, CS, and stannic chloride in a chamber may be used to give the users confidence in the efficacy of the mask.
Shortcomings
The protection of a gas mask comes with some disadvantages. The wearer of a typical gas mask must exert extra effort to breathe, and some of the exhaled air is re-inhaled due to the dead space between the facepiece and the user's face. The exposure to carbon dioxide may exceed its OELs (0.5% by volume/9 grammes per cubic metre for an eight-hour shift; 1.4%/27 grammes per m3 for 15 minutes' exposure) by a factor of many times: for gas masks and elastomeric respirators, up to 2.6%; and in case of long-term use, headache, dermatitis and acne may appear. The UK HSE textbook recommends limiting the use of respirators without air supply (that is, not PAPR) to one hour.
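The paired volume-percent and mass-per-volume limits follow from the ideal gas law. A quick conversion sketch; the reference conditions (25 °C, 1 atm) are an assumption here, and the residual discrepancy on the 15-minute figure suggests the source used slightly different conditions:

```python
# Convert a CO2 limit from volume-percent to grams per cubic metre.
R = 8.314        # J/(mol*K)
T = 298.15       # K (25 C, assumed reference temperature)
P = 101325.0     # Pa (1 atm, assumed reference pressure)
M_CO2 = 44.01    # g/mol

def vol_percent_to_g_per_m3(pct: float) -> float:
    moles_per_m3 = (pct / 100.0) * P / (R * T)  # ideal gas law, n/V = P/(RT)
    return moles_per_m3 * M_CO2

print(vol_percent_to_g_per_m3(0.5))  # ~9.0 g/m3, matching the 8-hour limit
print(vol_percent_to_g_per_m3(1.4))  # ~25 g/m3, vs. the quoted 27 g/m3
```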
Reaction and exchange
This principle relies on substances harmful to humans being usually more reactive than air. This method of separation will use some form of generally reactive substance (for example an acid) coating or supported by some solid material. An example is synthetic resins. These can be created with different groups of atoms (usually called functional groups) that have different properties. Thus a resin can be tailored to a particular toxic group. When the reactive substance comes in contact with the resin, it will bond to it, removing it from the air stream. It may also exchange with a less harmful substance at this site.
Though it was crude, the hypo helmet (the British Smoke Hood described above) was a stopgap measure for British troops in the trenches that offered at least some protection during a gas attack. As the months passed and poison gas was used more often, more sophisticated gas masks were developed and introduced. There are two main difficulties with gas mask design:
The user may be exposed to many types of toxic material. Military personnel are especially prone to being exposed to a diverse range of toxic gases. However, if the mask is for a particular use (such as the protection from a specific toxic material in a factory), then the design can be much simpler and the cost lower.
The protection will wear off over time. Filters will clog up, substrates for absorption will fill up, and reactive filters will run out of reactive substances. Thus the user only has protection for a limited time, and then they must either replace the filter device in the mask, or use a new mask.
| Technology | Food, water and health | null |
12778 | https://en.wikipedia.org/wiki/Group%20velocity | Group velocity | The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the modulation or envelope of the wave—propagates through space.
For example, if a stone is thrown into the middle of a very still pond, a circular pattern of waves with a quiescent center appears in the water, also known as a capillary wave. The expanding ring of waves is the wave group or wave packet, within which one can discern individual waves that travel faster than the group as a whole. The amplitudes of the individual waves grow as they emerge from the trailing edge of the group and diminish as they approach the leading edge of the group.
History
The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877.
Definition and interpretation
The group velocity is defined by the equation:

    v_g ≡ ∂ω/∂k

where ω is the wave's angular frequency (usually expressed in radians per second), and k is the angular wavenumber (usually expressed in radians per meter). The phase velocity is v_p = ω/k.
The function ω(k), which gives ω as a function of k, is known as the dispersion relation.
If ω is directly proportional to k, then the group velocity is exactly equal to the phase velocity. A wave of any shape will travel undistorted at this velocity.
If ω is a linear function of k, but not directly proportional to it, then the group velocity and phase velocity are different. The envelope of a wave packet will travel at the group velocity, while the individual peaks and troughs within the envelope will move at the phase velocity.
If ω is not a linear function of k, the envelope of a wave packet will become distorted as it travels. Since a wave packet contains a range of different frequencies (and hence different values of k), the group velocity ∂ω/∂k will be different for different values of k. Therefore, the envelope does not move at a single velocity, but its wavenumber components (k) move at different velocities, distorting the envelope. If the wavepacket has a narrow range of frequencies, and ω(k) is approximately linear over that narrow range, the pulse distortion will be small, in relation to the small nonlinearity. See further discussion below. For example, for deep water gravity waves, ω = √(gk), and hence v_g = v_p/2. This underlies the Kelvin wake pattern for the bow wave of all ships and swimming objects. Regardless of how fast they are moving, as long as their velocity is constant, on each side the wake forms an angle of 19.47° = arcsin(1/3) with the line of travel.
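The deep-water result is easy to check numerically; a minimal sketch (the 100 m wavelength is an arbitrary choice) differentiating ω = √(gk):

```python
# Deep-water gravity waves: omega(k) = sqrt(g*k), so v_group = v_phase / 2.
import math

g = 9.81                  # m/s^2
k = 2 * math.pi / 100.0   # wavenumber for a 100 m wavelength (arbitrary)

omega = lambda q: math.sqrt(g * q)
v_phase = omega(k) / k
dk = 1e-8
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical d(omega)/dk

print(v_group / v_phase)                   # ~0.5
print(math.degrees(math.asin(1.0 / 3.0)))  # ~19.47, the Kelvin half-angle
```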
Derivation
One derivation of the formula for group velocity is as follows.
Consider a wave packet as a function of position x and time t: α(x, t).

Let A(k) be its Fourier transform at time t = 0,

    α(x, 0) = ∫ A(k) e^(ikx) dk

By the superposition principle, the wavepacket at any time t is

    α(x, t) = ∫ A(k) e^(i(kx - ωt)) dk

where ω is implicitly a function of k.

Assume that the wave packet α is almost monochromatic, so that A(k) is sharply peaked around a central wavenumber k0.

Then, linearization gives

    ω(k) ≈ ω0 + (k - k0)ω0′

where

    ω0 = ω(k0)

and

    ω0′ = (∂ω/∂k) evaluated at k = k0

(see next section for discussion of this step). Then, after some algebra,

    α(x, t) = e^(i(k0x - ω0t)) ∫ A(k) e^(i(k - k0)(x - ω0′t)) dk

There are two factors in this expression. The first factor, e^(i(k0x - ω0t)), describes a perfect monochromatic wave with wavevector k0, with peaks and troughs moving at the phase velocity ω0/k0 within the envelope of the wavepacket.

The other factor,

    ∫ A(k) e^(i(k - k0)(x - ω0′t)) dk,

gives the envelope of the wavepacket. This envelope function depends on position and time only through the combination (x - ω0′t).

Therefore, the envelope of the wavepacket travels at velocity

    v_g = ω0′ = (∂ω/∂k) at k = k0

which explains the group velocity formula.
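The derivation can also be illustrated numerically: build a packet from plane-wave components with a spectrum sharply peaked around k0, let each component evolve at its own frequency, and track the envelope peak. A sketch using the deep-water relation ω = √(gk); every parameter value is arbitrary:

```python
# The envelope peak of a narrow-band wave packet moves at d(omega)/dk.
import numpy as np

g = 9.81
k0 = 1.0                               # central wavenumber (arbitrary)
k = np.linspace(0.7, 1.3, 601)         # narrow band around k0
A = np.exp(-((k - k0) / 0.05) ** 2)    # sharply peaked spectrum
omega = np.sqrt(g * k)                 # deep-water dispersion relation
x = np.linspace(-100.0, 200.0, 3000)

def envelope_peak(t: float) -> float:
    # alpha(x, t) = sum over k of A(k) exp(i(kx - omega t)); |alpha| is
    # the envelope because only positive-k components are summed.
    phases = np.exp(1j * (np.outer(x, k) - omega[None, :] * t))
    field = (A[None, :] * phases).sum(axis=1)
    return x[np.abs(field).argmax()]

t0, t1 = 0.0, 50.0
measured = (envelope_peak(t1) - envelope_peak(t0)) / (t1 - t0)
predicted = 0.5 * np.sqrt(g / k0)      # d(omega)/dk at k0
print(measured, predicted)             # agree to within the grid resolution
```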
Other expressions
For light, the refractive index n, vacuum wavelength λ0, and wavelength in the medium λ, are related by

    λ0 = λn,    n = c/v_p = ck/ω,

with v_p the phase velocity.

The group velocity, therefore, can be calculated by any of the following formulas:

    v_g = c / (n + ω ∂n/∂ω) = c / (n - λ0 ∂n/∂λ0) = v_p (1 + (λ/n) ∂n/∂λ)
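As a worked example of the middle formula, the sketch below uses a two-term Cauchy model n(λ0) = A + B/λ0²; the coefficients are invented for illustration and describe no real glass:

```python
# Group velocity of light from v_g = c / (n - lambda0 * dn/dlambda0),
# with an illustrative (made-up) two-term Cauchy refractive index.
c = 299_792_458.0          # m/s
A, B = 1.50, 5.0e-15       # B in m^2; both coefficients are invented

def n(lam0: float) -> float:
    return A + B / lam0**2  # lam0 = vacuum wavelength in metres

lam0 = 589e-9               # sodium D line, for concreteness
dlam = 1e-12
dn_dlam = (n(lam0 + dlam) - n(lam0 - dlam)) / (2 * dlam)

v_phase = c / n(lam0)
v_group = c / (n(lam0) - lam0 * dn_dlam)
print(v_phase, v_group)     # v_group < v_phase under normal dispersion
```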
Dispersion
Part of the previous derivation is the Taylor series approximation that:

    ω(k) ≈ ω(k0) + (k - k0) ω′(k0)
If the wavepacket has a relatively large frequency spread, or if the dispersion has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid, and higher-order terms in the Taylor expansion become important.
As a result, the envelope of the wave packet not only moves, but also distorts, in a manner that can be described by the material's group velocity dispersion. Loosely speaking, different frequency-components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out. This is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers.
Relation to phase velocity, refractive index and transmission speed
In three dimensions
For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way:
One dimension: v_p = ω/k,  v_g = ∂ω/∂k
Three dimensions: v_p = (ω/|k|) k̂,  v_g = ∇_k ω(k)
where ∇_k ω means the gradient of the angular frequency ω as a function of the wave vector k, and k̂ is the unit vector in the direction of k.
If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions.
In lossy or gainful media
The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive or gainful medium, this does not always hold. In these cases the group velocity may not be a well-defined quantity, or may not be a meaningful quantity.
In his text "Wave Propagation in Periodic Structures", Brillouin argued that in a lossy medium the group velocity ceases to have a clear physical meaning. An example concerning the transmission of electromagnetic waves through an atomic gas is given by Loudon. Another example is mechanical waves in the solar photosphere: The waves are damped (by radiative heat flow from the peaks to the troughs), and related to that, the energy velocity is often substantially lower than the waves' group velocity.
Despite this ambiguity, a common way to extend the concept of group velocity to complex media is to consider spatially damped plane wave solutions inside the medium, which are characterized by a complex-valued wavevector. Then, the imaginary part of the wavevector is arbitrarily discarded and the usual formula for group velocity is applied to the real part of the wavevector, i.e.,

    v_g = (∂(Re k)/∂ω)^(-1)

Or, equivalently, in terms of the real part of the complex refractive index, n = n̄ + iκ, one has

    c/v_g = n̄ + ω ∂n̄/∂ω
It can be shown that this generalization of group velocity continues to be related to the apparent speed of the peak of a wavepacket. The above definition is not universal, however: alternatively one may consider the time damping of standing waves (real k, complex ω), or allow the group velocity to be a complex-valued quantity. Different considerations yield distinct velocities, yet all definitions agree for the case of a lossless, gainless medium.
The above generalization of group velocity for complex media can behave strangely, and the example of anomalous dispersion serves as a good illustration.
At the edges of a region of anomalous dispersion, v_g becomes infinite (surpassing even the speed of light in vacuum), and v_g may easily become negative (its sign opposes Re k) inside the band of anomalous dispersion.
Superluminal group velocities
Since the 1980s, various experiments have verified that it is possible for the group velocity (as defined above) of laser light pulses sent through lossy materials, or gainful materials, to significantly exceed the speed of light in vacuum, c. The peaks of wavepackets were also seen to move faster than c.
In all these cases, however, there is no possibility that signals could be carried faster than the speed of light in vacuum, since the high value of v_g does not help to speed up the true motion of the sharp wavefront that would occur at the start of any real signal. Essentially the seemingly superluminal transmission is an artifact of the narrow-band approximation used above to define group velocity and happens because of resonance phenomena in the intervening medium. In a wide-band analysis it is seen that the apparently paradoxical speed of propagation of the signal envelope is actually the result of local interference of a wider band of frequencies over many cycles, all of which propagate perfectly causally and at phase velocity. The result is akin to the fact that shadows can travel faster than light, even if the light causing them always propagates at light speed; since the phenomenon being measured is only loosely connected with causality, it does not necessarily respect the rules of causal propagation, even if under normal circumstances it does so and leads to a common intuition.
| Physical sciences | Waves | Physics |
12786 | https://en.wikipedia.org/wiki/General%20anaesthetic | General anaesthetic | General anaesthetics (or anesthetics) are often defined as compounds that induce a loss of consciousness in humans or loss of righting reflex in animals. Clinical definitions are also extended to include an induced coma that causes lack of awareness to painful stimuli, sufficient to facilitate surgical applications in clinical and veterinary practice. General anaesthetics do not act as analgesics and should also not be confused with sedatives. General anaesthetics are a structurally diverse group of compounds whose mechanisms encompass multiple biological targets involved in the control of neuronal pathways. The precise workings are the subject of some debate and ongoing research.
General anesthetics elicit a state of general anesthesia. How this state should be defined remains somewhat controversial. General anesthetics, however, typically elicit several key reversible effects: immobility, analgesia, amnesia, unconsciousness, and reduced autonomic responsiveness to noxious stimuli.
Mode of administration
General anaesthetics can be administered either as gases or vapours (inhalational anaesthetics), or as injections (intravenous or even intramuscular). All of these agents share the property of being quite hydrophobic (i.e., as liquids, they are not freely miscible—or mixable—in water, and as gases they dissolve in oils better than in water). It is possible to deliver anaesthesia solely by inhalation or injection, but most commonly the two forms are combined, with an injection given to induce anaesthesia and a gas used to maintain it.
Inhalation
Inhalational anaesthetic substances are either volatile liquids or gases, and are usually delivered using an anaesthesia machine. An anaesthesia machine allows composing a mixture of oxygen, anaesthetics and ambient air, delivering it to the patient and monitoring patient and machine parameters. Liquid anaesthetics are vapourised in the machine.
Many compounds have been used for inhalation anaesthesia, but only a few are still in widespread use. Desflurane, isoflurane and sevoflurane are the most widely used volatile anaesthetics today. They are often combined with nitrous oxide. Older, less popular volatile anaesthetics include halothane, enflurane, and methoxyflurane. Researchers are also actively exploring the use of xenon as an anaesthetic.
Injection
Injectable anaesthetics are used for the induction and maintenance of a state of unconsciousness. Anaesthetists prefer to use intravenous injections, as they are faster, generally less painful and more reliable than intramuscular or subcutaneous injections. Among the most widely used drugs are:
Propofol
Etomidate
Barbiturates such as methohexital and thiopentone/thiopental
Benzodiazepines such as midazolam
Ketamine is used in the UK as "field anaesthesia", for instance in road traffic incidents or similar situations where an operation must be conducted at the scene or when there is not enough time to move to an operating room, while preferring other anaesthetics where conditions allow their use. It is more frequently used in the operative setting in the US.
Benzodiazepines are sedatives and are used in combinations with other general anaesthetics.
Mechanism of action
Induction and maintenance of general anesthesia, and the control of the various physiological side effects is typically achieved through a combinatorial drug approach. Individual general anesthetics vary with respect to their specific physiological and cognitive effects. While general anesthesia induction may be facilitated by one general anesthetic, others may be used in parallel or subsequently to achieve and maintain the desired anesthetic state. The drug approach utilized is dependent upon the procedure and the needs of the healthcare providers.
It is postulated that general anaesthetics exert their action by the activation of inhibitory central nervous system (CNS) receptors, and the inactivation of CNS excitatory receptors. The relative roles of different receptors are still under debate, but evidence exists for particular targets being involved with certain anaesthetics and drug effects.
Below are several key targets of general anesthetics that likely mediate their effects:
GABAA receptor agonists
GABAA receptors are chloride channels that hyperpolarize neurons and function as inhibitory CNS receptors. General anesthetics that agonize them are typically used to induce a state of sedation and/or unconsciousness. Such drugs include propofol, etomidate, isoflurane, benzodiazepines (midazolam, lorazepam, diazepam), and barbiturates (sodium thiopental, methohexital).
NMDA receptor antagonists
Ketamine, an NMDA receptor antagonist, is used primarily for its analgesic effects and in an off-label capacity for its anti-depressant effects. This drug, however, also alters arousal and is often used in parallel with other general anesthetics to help maintain a state of general anesthesia. Administration of ketamine alone leads to a dissociative state, in which a patient may experience auditory and visual hallucinations. Additionally, the perception of pain is dissociated from the perception of noxious stimuli. Ketamine appears to bind preferentially to the NMDA receptors on GABAergic interneurons, which may partially explain its effects.
Two-pore potassium channels (K2Ps) activation
Two-pore potassium channels (K2Ps) modulate the potassium conductance that contributes to the resting membrane potential in neurons. Opening of these channels therefore facilitates a hyperpolarizing current, which reduces neuronal excitability. K2Ps have been found to be affected by general anesthetics (esp. halogenated inhalation anesthetics) and are currently under investigation as potential targets. The K2P channel family comprises six subfamilies, which includes 15 unique members. 13 of these channels (excluding TWIK-1 and TWIK-2 homomers) are affected by general anesthetics. While it has not been determined that general anesthetics bind directly to these channels, nor is it clear how these drugs affect K2P conductance, electrophysiological studies have shown that certain general anesthetics result in K2P channel activation. This drug-elicited channel activation has been shown to be dependent upon specific amino-acids within certain K2P channels (i.e. TREK-1 and TASK channels). In the case of TREK-1, activation was shown through an anesthetic perturbation to membrane lipid clusters and activation of phospholipase D2; direct binding of anesthetics to purified reconstituted TREK-1 had no effect on conductance. The effects of certain general anesthetics are less pronounced in K2P knock-out mice, as compared to their wild-type counterparts. Cumulatively, TASK-1, TASK-3, and TREK-1 are particularly well supported as playing a role in the induction of general anesthesia.
Others
Opioid receptor agonists are primarily utilized for their analgesic effects. These drugs, however, can also elicit sedation. This effect is mediated by opioid actions on both opioid and acetylcholine receptors. While these drugs can lead to decreased arousal, they do not elicit a loss of consciousness. For this reason, they are often used in parallel with other general anesthetics to help maintain a state of general anesthesia. Such drugs include morphine, fentanyl, hydromorphone, and remifentanil.
Administration of the alpha2 adrenergic receptor agonist dexmedetomidine leads to sedation that resembles non-REM sleep. It is used in parallel with other general anesthetics to help maintain a state of general anesthesia, in an off-label capacity. Notably, patients are easily aroused from this non-REM sleep state.
Dopamine receptor antagonists have sedative and antiemetic properties. Previously, they were used in parallel with opioids to elicit neuroleptic anesthesia (catalepsy, analgesia, and unresponsiveness). They are no longer used in this context because patients experiencing neuroleptic anesthesia were frequently aware of the medical procedures being performed, but could not move or express emotion. Such drugs include haloperidol and droperidol.
Stages of anesthesia
During administration of an anesthetic, the receiver goes through different stages of behavior ultimately leading to unconsciousness. With intravenous anesthetics this progression is so rapid that the stages are not normally observed. The four stages of anesthesia are described using Guedel's signs, signifying the depth of anesthesia. These stages describe effects of anesthesia mainly on cognition, muscular activity, and respiration.
Stage I: Analgesia
The receiver of the anesthesia first experiences analgesia, followed by amnesia and a sense of confusion when moving into the next stage.
Stage II: Excitement
Stage II is often characterized by the receiver being delirious and confused, with severe amnesia. Irregularities in the patterns of respiration are common at this stage of anesthesia. Nausea and vomiting are also indicators of Stage II anesthesia. Struggling and panic can sometimes occur as a result of delirium.
Stage III: Surgical Anesthesia
Normal breathing resumes at the beginning of Stage III. Nearing the end of the stage, breathing ceases completely. Indicators for stage III anesthesia include loss of the eyelash reflex as well as regular breathing. Depth of stage III anesthesia can often be gauged by eye movement and pupil size.
Stage IV: Medullary Depression
No respiration occurs in stage IV. This is shortly followed by circulatory failure and depression of the vasomotor centers. Death is common at this stage of anesthesia if no breathing and circulatory support is available.
Physiological side effects
Aside from the clinically advantageous effects of general anesthetics, there are a number of other physiological consequences mediated by this class of drug. Notably, a reduction in blood pressure can be facilitated by a variety of mechanisms, including reduced cardiac contractility and dilation of the vasculature. This drop in blood pressure may activate a reflexive increase in heart rate, due to a baroreceptor-mediated feedback mechanism. Some anesthetics, however, disrupt this reflex.
Patients under general anesthesia are at greater risk of developing hypothermia, as the aforementioned vasodilation increases the heat lost via peripheral blood flow. By and large, these drugs reduce the internal body temperature threshold at which autonomic thermoregulatory mechanisms are triggered in response to cold. (On the other hand, the threshold at which thermoregulatory mechanisms are triggered in response to heat is typically increased.)
Anesthetics typically affect respiration. Inhalational anesthetics elicit bronchodilation, an increase in respiratory rate, and reduced tidal volume. The net effect is decreased respiration, which must be managed by healthcare providers while the patient is under general anesthesia. The reflexes that function to alleviate airway obstructions are also dampened (e.g. gag and cough). This, compounded with a reduction in lower esophageal sphincter tone that increases the frequency of regurgitation, leaves patients especially prone to asphyxiation while under general anesthesia. Healthcare providers closely monitor individuals under general anesthesia and utilize a number of devices, such as an endotracheal tube, to ensure patient safety.
General anesthetics also affect the chemoreceptor trigger zone and brainstem vomiting center, eliciting nausea and vomiting following treatment.
Pharmacokinetics
Intravenous general anesthetics
Induction
Intravenously delivered general anesthetics are typically small and highly lipophilic molecules. These characteristics facilitate their rapid preferential distribution into the brain and spinal cord, which are both highly vascularized and lipophilic. It is here where the actions of these drugs lead to general anesthesia induction.
Elimination
Following distribution into the central nervous system (CNS), the anesthetic drug then diffuses out of the CNS into the muscles and viscera, followed by adipose tissues. In patients given a single injection of drug, this redistribution results in termination of general anesthesia. Therefore, following administration of a single anesthetic bolus, duration of drug effect is dependent solely upon the redistribution kinetics.
The half-life of an anesthetic drug following a prolonged infusion, however, depends upon drug redistribution kinetics, drug metabolism in the liver, and the existing drug concentration in fat. When large quantities of an anesthetic drug have already been dissolved in the body's fat stores, this can slow its redistribution out of the brain and spinal cord, prolonging its CNS effects. For this reason, the half-lives of these infused drugs are said to be context-dependent. Generally, prolonged anesthetic drug infusions result in longer drug half-lives, slowed elimination from the brain and spinal cord, and delayed termination of general anesthesia.
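This context dependence can be caricatured with a toy compartmental model. In the sketch below, the three compartments and all rate constants are invented (it models no particular drug); the point is only that the computed half-time of the central concentration grows with infusion duration:

```python
# Toy three-compartment model of context-sensitive half-time: drug
# returning from peripheral stores slows the post-infusion fall of the
# central (blood/CNS) concentration. All rate constants are made up.
import numpy as np

k10, k12, k21, k13, k31 = 0.5, 0.3, 0.1, 0.2, 0.01   # 1/h, illustrative

def half_time(infusion_h: float, dt: float = 0.001) -> float:
    c1 = c2 = c3 = 0.0
    t, times, central = 0.0, [], []
    while t < infusion_h + 24.0:
        rate_in = 1.0 if t < infusion_h else 0.0   # unit infusion rate
        dc1 = rate_in - (k10 + k12 + k13) * c1 + k21 * c2 + k31 * c3
        dc2 = k12 * c1 - k21 * c2                  # fast peripheral tissue
        dc3 = k13 * c1 - k31 * c3                  # slow peripheral (fat)
        c1, c2, c3 = c1 + dc1 * dt, c2 + dc2 * dt, c3 + dc3 * dt
        t += dt
        times.append(t); central.append(c1)
    times, central = np.array(times), np.array(central)
    c_stop = central[times >= infusion_h][0]       # level when infusion stops
    below = (times >= infusion_h) & (central <= c_stop / 2)
    return times[below][0] - infusion_h

for hours in (0.5, 2.0, 8.0):
    print(f"{hours} h infusion -> half-time {half_time(hours):.2f} h")
```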
Inhalational general anesthetics
Minimum alveolar concentration (MAC) is the concentration of an inhalational anesthetic in the lungs that prevents 50% of patients from responding to surgical incision. This value is used to compare the potencies of various inhalational general anesthetics and informs the partial pressure of the drug utilized by healthcare providers during general anesthesia induction and/or maintenance.
Induction
Induction of anesthesia is facilitated by diffusion of an inhaled anesthetic drug into the brain and spinal cord. Diffusion throughout the body proceeds until the drug's partial pressure within the various tissues is equivalent to the partial pressure of the drug within the lungs. Healthcare providers can control the rate of anesthesia induction and final tissue concentrations of the anesthetic by varying the partial pressure of the inspired anesthetic. A higher drug partial pressure in the lungs will drive diffusion more rapidly throughout the body and yield a higher maximum tissue concentration. Respiratory rate and inspiratory volume will also affect the promptness of anesthesia onset, as will the extent of pulmonary blood flow.
The partition coefficient of a gaseous drug is indicative of its relative solubility in various tissues. This metric is the relative drug concentration between two tissues, when their partial pressures are equal (gas:blood, fat:blood, etc.). Inhalational anesthetics vary widely with respect to their tissue solubilities and partition coefficients. Anesthetics that are highly soluble require many molecules of drug to raise the partial pressure within a given tissue, as opposed to minimally soluble anesthetics which require relatively few. Generally, inhalational anesthetics that are minimally soluble reach equilibrium more quickly. Inhalational anesthetics that have a high fat:blood partition coefficient, however, reach equilibrium more slowly, due to the minimal vascularization of fat tissue, which serves as a large, slowly-filling reservoir for the drug.
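The effect of solubility on the speed of equilibration can be caricatured with a one-compartment alveolar model. In the sketch below the lung volume, ventilation and cardiac output are rough illustrative values, the blood:gas partition coefficients are commonly quoted approximate figures, and venous return of agent to the lung is ignored, so the ratios plateau below 1 instead of slowly approaching it as tissues saturate:

```python
# Toy model: the alveolar fraction FA approaches the inspired fraction
# FI faster for agents with a low blood:gas partition coefficient.
V_lung = 2.5   # L, lung volume, illustrative
VA = 4.0       # L/min, alveolar ventilation, illustrative
Q = 5.0        # L/min, cardiac output, illustrative

def fa_over_fi(lam: float, minutes: float = 30.0, dt: float = 0.001) -> float:
    fa = 0.0
    for _ in range(int(minutes / dt)):
        uptake = lam * Q * fa                      # carried off by blood
        fa += dt * (VA * (1.0 - fa) - uptake) / V_lung
    return fa

for name, lam in [("nitrous oxide", 0.47), ("sevoflurane", 0.65),
                  ("isoflurane", 1.4), ("halothane", 2.4)]:
    print(f"{name}: FA/FI ~ {fa_over_fi(lam):.2f} after 30 min")
```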
Elimination
Inhaled anesthetics are eliminated via expiration, following diffusion into the lungs. This process is dependent largely upon the anesthetic blood:gas partition coefficient, tissue solubility, blood flow to the lungs, and patient respiratory rate and inspiratory volume. For gases that have minimal tissue solubility, termination of anesthesia generally occurs as rapidly as the onset of anesthesia. For gases that have high tissue solubility, however, termination of anesthesia is generally context-dependent. As with intravenous anesthetic infusions, prolonged delivery of highly soluble anesthetic gases generally results in longer drug half-lives, slowed elimination from the brain and spinal cord, and delayed termination of anesthesia.
Metabolism of inhaled anesthetics is generally not a major route of drug elimination.
History
Ethanol
While most research focuses on the intoxicating effects of ethanol, it can also produce general anesthesia. Since antiquity, prior to the development of modern agents, alcohol was used as a general anaesthetic.
| Biology and health sciences | Anesthetics | Health |
12796 | https://en.wikipedia.org/wiki/Genotype | Genotype | The genotype of an organism is its complete set of genetic material. Genotype can also be used to refer to the alleles or variants an individual carries in a particular gene or genetic location. The number of alleles an individual can have in a specific gene depends on the number of copies of each chromosome found in that species, also referred to as ploidy. In diploid species like humans, two full sets of chromosomes are present, meaning each individual has two alleles for any given gene. If both alleles are the same, the genotype is referred to as homozygous. If the alleles are different, the genotype is referred to as heterozygous.
Genotype contributes to phenotype, the observable traits and characteristics in an individual or organism. The degree to which genotype affects phenotype depends on the trait. For example, the petal color in a pea plant is exclusively determined by genotype. The petals can be purple or white depending on the alleles present in the pea plant. However, other traits are only partially influenced by genotype. These traits are often called complex traits because they are influenced by additional factors, such as environmental and epigenetic factors. Not all individuals with the same genotype look or act the same way because appearance and behavior are modified by environmental and growing conditions. Likewise, not all organisms that look alike necessarily have the same genotype.
The term genotype was coined by the Danish botanist Wilhelm Johannsen in 1903.
Phenotype
Any given gene will usually cause an observable change in an organism, known as the phenotype. The terms genotype and phenotype are distinct for at least two reasons:
To distinguish the source of an observer's knowledge (one can know about genotype by observing DNA; one can know about phenotype by observing outward appearance of an organism).
Genotype and phenotype are not always directly correlated. Some genes only express a given phenotype in certain environmental conditions. Conversely, some phenotypes could be the result of multiple genotypes. The genotype is commonly confused with the phenotype, which describes the end result of both genetic and environmental factors giving the observed expression (e.g. blue eyes, hair color, or various hereditary diseases).
A simple example to illustrate genotype as distinct from phenotype is the flower colour in pea plants (see Gregor Mendel). There are three available genotypes, PP (homozygous dominant), Pp (heterozygous), and pp (homozygous recessive). All three have different genotypes but the first two have the same phenotype (purple) as distinct from the third (white).
A more technical example to illustrate genotype is the single-nucleotide polymorphism or SNP. A SNP occurs when corresponding sequences of DNA from different individuals differ at one DNA base, for example where the sequence AAGCCTA changes to AAGCTTA. This site contains two alleles: C and T. SNPs typically have three genotypes, denoted generically AA, Aa, and aa. In the example above, the three genotypes would be CC, CT, and TT. Other types of genetic marker, such as microsatellites, can have more than two alleles, and thus many different genotypes.
Penetrance is the proportion of individuals showing a specified genotype in their phenotype under a given set of environmental conditions.
Mendelian inheritance
Traits that are determined exclusively by genotype are typically inherited in a Mendelian pattern. These laws of inheritance were described extensively by Gregor Mendel, who performed experiments with pea plants to determine how traits were passed on from generation to generation. He studied phenotypes that were easily observed, such as plant height, petal color, or seed shape. He was able to observe that if he crossed two true-breeding plants with distinct phenotypes, all the offspring would have the same phenotype. For example, when he crossed a tall plant with a short plant, all the resulting plants would be tall. However, when he self-fertilized the plants that resulted, about 1/4 of the second generation would be short. He concluded that some traits were dominant, such as tall height, and others were recessive, like short height. Though Mendel was not aware at the time, each phenotype he studied was controlled by a single gene with two alleles. In the case of plant height, one allele caused the plants to be tall, and the other caused plants to be short. When the tall allele was present, the plant would be tall, even if the plant was heterozygous. In order for the plant to be short, it had to be homozygous for the recessive allele.
One way this can be illustrated is using a Punnett square. In a Punnett square, the genotypes of the parents are placed on the outside. An uppercase letter is typically used to represent the dominant allele, and a lowercase letter is used to represent the recessive allele. The possible genotypes of the offspring can then be determined by combining the parent genotypes. In the example sketched below, both parents are heterozygous, with a genotype of Bb. The offspring can inherit a dominant allele from each parent, making them homozygous with a genotype of BB. The offspring can inherit a dominant allele from one parent and a recessive allele from the other parent, making them heterozygous with a genotype of Bb. Finally, the offspring could inherit a recessive allele from each parent, making them homozygous with a genotype of bb. Plants with the BB and Bb genotypes will look the same, since the B allele is dominant. The plant with the bb genotype will have the recessive trait.
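A Punnett square for a single gene can be generated mechanically by pairing every allele from one parent with every allele from the other. A minimal sketch of that enumeration:

```python
from itertools import product

def punnett_square(parent1: str, parent2: str) -> list[str]:
    """Enumerate offspring genotypes for a single gene, given parent
    genotypes as two-character strings such as 'Bb'."""
    offspring = []
    for allele1, allele2 in product(parent1, parent2):
        # Sort so the dominant (uppercase) allele is written first.
        offspring.append("".join(sorted(allele1 + allele2)))
    return offspring

print(punnett_square("Bb", "Bb"))  # ['BB', 'Bb', 'Bb', 'bb'] -> 1:2:1 ratio
```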
These inheritance patterns can also be applied to hereditary diseases or conditions in humans or animals. Some conditions are inherited in an autosomal dominant pattern, meaning individuals with the condition typically have an affected parent as well. A classic pedigree for an autosomal dominant condition shows affected individuals in every generation. Other conditions are inherited in an autosomal recessive pattern, where affected individuals do not typically have an affected parent. Since each parent must have a copy of the recessive allele in order to have an affected offspring, the parents are referred to as carriers of the condition.
In autosomal conditions, the sex of the offspring does not play a role in their risk of being affected. In sex-linked conditions, the sex of the offspring affects their chances of having the condition. In humans, females inherit two X chromosomes, one from each parent, while males inherit an X chromosome from their mother and a Y chromosome from their father. X-linked dominant conditions can be distinguished from autosomal dominant conditions in pedigrees by the lack of transmission from fathers to sons, since affected fathers only pass their X chromosome to their daughters. In X-linked recessive conditions, males are typically affected more commonly because they are hemizygous, with only one X chromosome. In females, the presence of a second X chromosome will prevent the condition from appearing. Females are therefore carriers of the condition and can pass the trait on to their sons.
Mendelian patterns of inheritance can be complicated by additional factors. Some diseases show incomplete penetrance, meaning not all individuals with the disease-causing allele develop signs or symptoms of the disease. Penetrance can also be age-dependent, meaning signs or symptoms of disease are not visible until later in life. For example, Huntington disease is an autosomal dominant condition, but up to 25% of individuals with the affected genotype will not develop symptoms until after age 50. Another factor that can complicate Mendelian inheritance patterns is variable expressivity, in which individuals with the same genotype show different signs or symptoms of disease. For example, individuals with polydactyly can have a variable number of extra digits.
Non-Mendelian inheritance
Many traits are not inherited in a Mendelian fashion, but have more complex patterns of inheritance.
Incomplete dominance
For some traits, neither allele is completely dominant. Heterozygotes often have an appearance somewhere in between those of homozygotes. For example, a cross between true-breeding red and white Mirabilis jalapa results in pink flowers.
Codominance
Codominance refers to traits in which both alleles are expressed in the offspring in approximately equal amounts. A classic example is the ABO blood group system in humans, where both the A and B alleles are expressed when they are present. Individuals with the AB genotype have both A and B proteins expressed on their red blood cells.
Epistasis
Epistasis occurs when the phenotype of one gene is affected by one or more other genes, often through some sort of masking effect of one gene on the other. For example, suppose the "A" gene codes for hair color, with a dominant "A" allele coding for brown hair and a recessive "a" allele coding for blonde hair, while a separate "B" gene controls hair growth, with a recessive "b" allele causing baldness. If the individual has the BB or Bb genotype, then they produce hair and the hair color phenotype can be observed, but if the individual has a bb genotype, then the person is bald, which masks the A gene entirely.
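The masking logic of this example can be written as a small function; the gene names and genotype strings follow the hypothetical "A"/"B" genes in the text.

```python
def hair_phenotype(color_genotype: str, growth_genotype: str) -> str:
    """Phenotype for the two-gene epistasis example above: the 'B'
    (hair growth) gene masks the 'A' (hair color) gene when bb."""
    if growth_genotype == "bb":
        return "bald"  # the color genotype is masked entirely
    return "brown hair" if "A" in color_genotype else "blonde hair"

print(hair_phenotype("Aa", "Bb"))  # brown hair
print(hair_phenotype("AA", "bb"))  # bald, despite two brown-hair alleles
```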
Polygenic traits
A polygenic trait is one whose phenotype depends on the additive effects of multiple genes. The contributions of each of these genes are typically small and add up to a final phenotype with a large amount of variation. A well-studied example of this is the number of sensory bristles on a fly. These additive effects also explain the amount of variation in human eye color.
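The additive model can be illustrated with a toy simulation in which each of many genes contributes a small amount to the trait; the parameters below are arbitrary, chosen only to show the resulting spread of values.

```python
import random

def polygenic_trait_value(n_genes: int = 10, p: float = 0.5) -> int:
    """Toy additive model: the trait value is the count of
    trait-increasing alleles across two copies of each of n_genes."""
    return sum(random.random() < p for _ in range(2 * n_genes))

values = [polygenic_trait_value() for _ in range(10_000)]
# Many small additive contributions produce a wide, roughly
# bell-shaped distribution of trait values.
print(min(values), max(values), sum(values) / len(values))
```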
Genotyping
Genotyping refers to the method used to determine an individual's genotype. There are a variety of techniques that can be used to assess genotype. The genotyping method typically depends on what information is being sought. Many techniques initially require amplification of the DNA sample, which is commonly done using PCR.
Some techniques are designed to investigate specific SNPs or alleles in a particular gene or set of genes, such as whether an individual is a carrier for a particular condition. This can be done via a variety of techniques, including allele specific oligonucleotide (ASO) probes or DNA sequencing. Tools such as multiplex ligation-dependent probe amplification can also be used to look for duplications or deletions of genes or gene sections. Other techniques are meant to assess a large number of SNPs across the genome, such as SNP arrays. This type of technology is commonly used for genome-wide association studies.
Large-scale techniques to assess the entire genome are also available. This includes karyotyping to determine the number of chromosomes an individual has and chromosomal microarrays to assess for large duplications or deletions in the chromosome. More detailed information can be determined using exome sequencing, which provides the specific sequence of all DNA in the coding region of the genome, or whole genome sequencing, which sequences the entire genome including non-coding regions.
Genotype encoding
In linear models, the genotypes can be encoded in different manners. Let us consider a biallelic locus with two possible alleles, encoded by A and a, where A corresponds to the dominant allele and a to the reference allele. The commonly used encodings are:

Genotype:    aa   Aa   AA
Additive:     0    1    2
Dominant:     0    1    1
Recessive:    0    0    1
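As a sketch of how such encodings are applied in practice, the snippet below maps genotype strings to the numeric codes in the table above before they are used as predictors in a linear model:

```python
import numpy as np

# Numeric encodings of a biallelic genotype for use in linear models,
# following the table above (A dominant, a reference).
ENCODINGS = {
    "additive":  {"aa": 0, "Aa": 1, "AA": 2},
    "dominant":  {"aa": 0, "Aa": 1, "AA": 1},
    "recessive": {"aa": 0, "Aa": 0, "AA": 1},
}

genotypes = ["AA", "Aa", "aa", "Aa"]
x_additive = np.array([ENCODINGS["additive"][g] for g in genotypes])
print(x_additive)  # [2 1 0 1]
```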
| Biology and health sciences | Genetics | Biology |
12806 | https://en.wikipedia.org/wiki/Gemstone | Gemstone | A gemstone (also called a fine gem, jewel, precious stone, semiprecious stone, or simply gem) is a piece of mineral crystal which, when cut or polished, is used to make jewelry or other adornments. Certain rocks (such as lapis lazuli, opal, and obsidian) and occasionally organic materials that are not minerals (such as amber, jet, and pearl) may also be used for jewelry and are therefore often considered to be gemstones as well. Most gemstones are hard, but some softer minerals such as brazilianite may be used in jewelry because of their color or luster or other physical properties that have aesthetic value. However, generally speaking, soft minerals are not typically used as gemstones by virtue of their brittleness and lack of durability.
Found all over the world, the industry of coloured gemstones (i.e. anything other than diamonds) is currently estimated at US$1.55 billion and is projected to steadily increase to a value of $4.46 billion by 2033.
A gem expert is a gemologist; a gem maker is called a lapidarist or gemcutter; a diamond cutter is called a diamantaire.
Characteristics and classification
The traditional classification in the West, which goes back to the ancient Greeks, begins with a distinction between precious and semi-precious; similar distinctions are made in other cultures. In modern use, the precious stones are emerald, ruby, sapphire and diamond, with all other gemstones being semi-precious. This distinction reflects the rarity of the respective stones in ancient times, as well as their quality: all are translucent, with fine color in their purest forms (except for the colorless diamond), and very hard with a hardness score of 8 to 10 on the Mohs scale. Other stones are classified by their color, translucency, and hardness. The traditional distinction does not necessarily reflect modern values; for example, while garnets are relatively inexpensive, a green garnet called tsavorite can be far more valuable than a mid-quality emerald. Another traditional term for semi-precious gemstones used in art history and archaeology is hardstone. Use of the terms 'precious' and 'semi-precious' in a commercial context is, arguably, misleading in that it suggests certain stones are more valuable than others when this is not reflected in the actual market value, although it would generally be correct if referring to desirability.
In modern times gemstones are identified by gemologists, who describe gems and their characteristics using technical terminology specific to the field of gemology. The first characteristic a gemologist uses to identify a gemstone is its chemical composition. For example, diamonds are made of carbon () and rubies of aluminium oxide (). Many gems are crystals which are classified by their crystal system such as cubic or trigonal or monoclinic. Another term used is habit, the form the gem is usually found in. For example, diamonds, which have a cubic crystal system, are often found as octahedrons.
Gemstones are classified into different groups, species, and varieties. For example, ruby is the red variety of the species corundum, while any other color of corundum is considered sapphire. Other examples are the emerald (green), aquamarine (blue), red beryl (red), goshenite (colorless), heliodor (yellow), and morganite (pink), which are all varieties of the mineral species beryl.
Gems are characterized in terms of their color (hue, tone and saturation), optical phenomena, luster, refractive index, birefringence, dispersion, specific gravity, hardness, cleavage, and fracture. They may exhibit pleochroism or double refraction. They may have luminescence and a distinctive absorption spectrum. Gemstones may also be classified in terms of their "water". This is a recognized grading of the gem's luster, transparency, or "brilliance". Very transparent gems are considered "first water", while "second" or "third water" gems are those of a lesser transparency. Additionally, material or flaws within a stone may be present as inclusions.
Value
Gemstones have no universally accepted grading system. Diamonds are graded using a system developed by the Gemological Institute of America (GIA) in the early 1950s. Historically, all gemstones were graded using the naked eye. The GIA system included a major innovation: the introduction of 10x magnification as the standard for grading clarity. Other gemstones are still graded using the naked eye (assuming 20/20 vision).
A mnemonic device, the "four Cs" (color, cut, clarity, and carats), has been introduced to help describe the factors used to grade a diamond. With modification, these categories can be useful in understanding the grading of all gemstones. The four criteria carry different weights depending upon whether they are applied to colored gemstones or to colorless diamonds. In diamonds, the cut is the primary determinant of value, followed by clarity and color. An ideally cut diamond will sparkle: it breaks light down into its constituent rainbow colors (dispersion), chops it up into bright little pieces (scintillation), and delivers it to the eye (brilliance). In its rough crystalline form, a diamond will do none of these things; it requires proper fashioning, and this is called "cut". In gemstones that have color, including colored diamonds, the purity and beauty of that color is the primary determinant of quality.
Physical characteristics that make a colored stone valuable are color, clarity to a lesser extent (emeralds will always have a number of inclusions), cut, unusual optical phenomena within the stone such as color zoning (the uneven distribution of coloring within a gem) and asteria (star effects).
Apart from the more generic and commonly used gemstones such as diamonds, rubies, sapphires, and emeralds, pearls and opal have also been defined as precious in the jewellery trade. Until the discovery of bulk amethyst deposits in Brazil in the 19th century, amethyst was considered a "precious stone" as well, going back to ancient Greece. Even in the last century certain stones such as aquamarine, peridot and cat's eye (cymophane) have been popular and hence been regarded as precious, reinforcing the notion that a mineral's rarity may be implicated in its classification as a precious stone and thus contribute to its value.
Today the gemstone trade no longer makes such a distinction. Many gemstones are used in even the most expensive jewelry, depending on the brand-name of the designer, fashion trends, market supply, treatments, etc. Nevertheless, diamonds, rubies, sapphires, and emeralds still have a reputation that exceeds those of other gemstones.
Rare or unusual gemstones, generally understood to include those gemstones which occur so infrequently in gem quality that they are scarcely known except to connoisseurs, include andalusite, axinite, cassiterite, clinohumite, painite and red beryl.
Gemstone pricing and value are governed by factors and characteristics in the quality of the stone. These characteristics include clarity, rarity, freedom from defects, and the beauty of the stone, as well as the demand for such stones. Pricing influencers differ between colored gemstones and diamonds: the pricing of colored stones is determined by market supply and demand, while diamond pricing is more intricate.
In addition to the aesthetic and adorning/ornamental purpose of gemstones, there are many proponents of energy medicine who also value gemstones on the basis of their alleged healing powers.
A gemstone that has been rising in popularity is Cuprian Elbaite Tourmaline which is also called "Paraiba Tourmaline". It was first discovered in the late 1980s in Paraíba, Brazil and later in Mozambique and Nigeria. It is famous for its glowing neon blue color. Paraiba Tourmaline has become one of the most popular gemstones in recent times thanks to its color and is considered to be one of the important gemstones after rubies, emeralds, and sapphires according to Gübelin Gemlab. Even though it is a tourmaline, Paraiba Tourmaline is one of the most expensive gemstones.
Grading
There are a number of laboratories which grade and provide reports on gemstones.
Gemological Institute of America (GIA), the main provider of education services and diamond grading reports
International Gemological Institute (IGI), independent laboratory for grading and evaluation of diamonds, jewelry, and colored stones
Hoge Raad Voor Diamant (HRD Antwerp), The Diamond High Council, Belgium is one of Europe's oldest laboratories; its main stakeholder is the Antwerp World Diamond Centre
American Gemological Society (AGS) is not as widely recognized nor as old as the GIA
American Gem Trade Laboratory which is part of the American Gem Trade Association (AGTA), a trade organization of jewelers and dealers of colored stones
American Gemological Laboratories (AGL), owned by Christopher P. Smith
European Gemological Laboratory (EGL), founded in 1974 by Guy Margel in Belgium
Gemmological Association of All Japan (GAAJ-ZENHOKYO), Zenhokyo, Japan, active in gemological research
The Gem and Jewelry Institute of Thailand (Public Organization) or GIT, Thailand's national institute for gemological research and gem testing, Bangkok
Gemmology Institute of Southern Africa, Africa's premium gem laboratory
Asian Institute of Gemological Sciences (AIGS), the oldest gemological institute in South East Asia, involved in gemological education and gem testing
Swiss Gemmological Institute (SSEF), founded by Henry Hänni, focusing on colored gemstones and the identification of natural pearls
Gübelin Gem Lab, the traditional Swiss lab founded by Eduard Gübelin
Each laboratory has its own methodology to evaluate gemstones. A stone can be called "pink" by one lab while another lab calls it "padparadscha". One lab can conclude a stone is untreated, while another lab might conclude that it is heat-treated. To minimize such differences, seven of the most respected labs, AGTA-GTL (New York), CISGEM (Milano), GAAJ-ZENHOKYO (Tokyo), GIA (Carlsbad), GIT (Bangkok), Gübelin (Lucerne) and SSEF (Basel), have established the Laboratory Manual Harmonisation Committee (LMHC), for the standardization of wording reports, promotion of certain analytical methods and interpretation of results. Country of origin has sometimes been difficult to determine, due to the constant discovery of new source locations. Determining a "country of origin" is thus much more difficult than determining other aspects of a gem (such as cut, clarity, etc.).
Gem dealers are aware of the differences between gem laboratories and will make use of the discrepancies to obtain the best possible certificate.
Cutting and polishing
A few gemstones are used as gems in the crystal or other forms in which they are found. Most, however, are cut and polished for usage as jewelry. The two main classifications are as follows:
Stones cut as smooth, dome-shaped stones called cabochons, or simply cabs. These have been a popular shape since ancient times and are more durable than faceted gems.
Stones which are cut with a faceting machine by polishing small flat windows called facets at regular intervals at exact angles.
Stones which are opaque or semi-opaque such as opal, turquoise, variscite, etc. are commonly cut as cabochons. These gems are designed to show the stone's color, luster and other surface properties as opposed to internal reflection properties like brilliance. Grinding wheels and polishing agents are used to grind, shape, and polish the smooth dome shape of the stones.
Gems that are transparent are normally faceted, a method that shows the optical properties of the stone's interior to its best advantage by maximizing reflected light which is perceived by the viewer as sparkle. There are many commonly used shapes for faceted stones. The facets must be cut at the proper angles, which varies depending on the optical properties of the gem. If the angles are too steep or too shallow, the light will pass through and not be reflected back toward the viewer. The faceting machine is used to hold the stone onto a flat lap for cutting and polishing the flat facets. Rarely, some cutters use special curved laps to cut and polish curved facets.
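The dependence of facet angles on optical properties comes from total internal reflection: light striking a facet from inside the stone at more than the critical angle is reflected back into the gem rather than escaping. A short sketch computing that angle from a gem's refractive index (approximate published values):

```python
import math

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a gem-to-air
    boundary, from Snell's law: sin(theta_c) = 1 / n."""
    return math.degrees(math.asin(1.0 / n))

# Approximate refractive indices: diamond ~2.42, quartz ~1.54.
for name, n in [("diamond", 2.42), ("quartz", 1.54)]:
    print(f"{name}: critical angle ~ {critical_angle_deg(n):.1f} degrees")
# Diamond's small critical angle (~24 degrees) means a well-chosen cut
# returns much of the entering light to the viewer.
```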
Colors
The color of any material is due to the nature of light itself. Daylight, often called white light, is all of the colors of the spectrum combined. When light strikes a material, most of the light is absorbed while a smaller amount of a particular frequency or wavelength is reflected. The part that is reflected reaches the eye as the perceived color. A ruby appears red because it absorbs all other colors of white light while reflecting red.
A material which is mostly the same can exhibit different colors. For example, ruby and sapphire have the same primary chemical composition (both are corundum) but exhibit different colors because of impurities which absorb and reflect different wavelengths of light depending on their individual compositions. Even the same named gemstone can occur in many different colors: sapphires show different shades of blue and pink and "fancy sapphires" exhibit a whole range of other colors from yellow to orange-pink, the latter called "padparadscha sapphire".
This difference in color is based on the atomic structure of the stone. Although the different stones formally have the same chemical composition and structure, they are not exactly the same. Every now and then an atom is replaced by a completely different atom, sometimes as few as one in a million atoms. These so-called impurities are sufficient to absorb certain colors and leave the other colors unaffected. For example, beryl, which is colorless in its pure mineral form, becomes emerald with chromium impurities. If manganese is added instead of chromium, beryl becomes pink morganite. With iron, it becomes aquamarine. Some gemstone treatments make use of the fact that these impurities can be "manipulated", thus changing the color of the gem.
Treatment
Gemstones are often treated to enhance the color or clarity of the stone. In some cases, the treatment applied to the gemstone can also increase its durability. Even though natural gemstones can be transformed using the traditional method of cutting and polishing, other treatment options allow the stone's appearance to be enhanced. Depending on their type and extent, treatments can affect the value of the stone. Some treatments are used widely because the resulting gem is stable, while others are not accepted, most commonly because the gem color is unstable and may revert to the original tone.
Early history
Before the innovation of modern-day tools, thousands of years ago, people were recorded to use a variety of techniques to treat and enhance gemstones. Some of the earliest methods of gemstone treatment date back to the Minoan Age, for example foiling, which is where metal foil is used to enhance a gemstone's colour. Other methods recorded 2000 years ago in the book Natural History by Pliny the Elder include oiling and dyeing/staining.
Heat
Heat can either improve or spoil gemstone color or clarity. The heating process has been well known to gem miners and cutters for centuries, and in many stone types heating is a common practice. Most citrine is made by heating amethyst, and partial heating with a strong gradient results in "ametrine" – a stone partly amethyst and partly citrine. Aquamarine is often heated to remove yellow tones, or to change green colors into the more desirable blue, or enhance its existing blue color to a deeper blue.
Nearly all tanzanite is heated at low temperatures to remove brown undertones and give a more desirable blue / purple color. A considerable portion of all sapphire and ruby is treated with a variety of heat treatments to improve both color and clarity.
When jewelry containing diamonds is heated for repairs, the diamond should be protected with boric acid; otherwise, the diamond, which is pure carbon, could be burned on the surface or even burned completely up. When jewelry containing sapphires or rubies is heated, those stones should not be coated with boric acid (which can etch the surface) or any other substance. They do not have to be protected from burning like a diamond, although the stones do need to be protected from heat stress fracture by immersing the part of the jewelry with stones in water while the metal parts are heated.
Radiation
The irradiation process is widely practiced in the jewelry industry and has enabled the creation of gemstone colors that do not exist or are extremely rare in nature. However, particularly when done in a nuclear reactor, the processes can make gemstones radioactive. Health risks related to the residual radioactivity of the treated gemstones have led to government regulations in many countries.
Virtually all blue topaz, both the lighter and the darker blue shades such as "London" blue, has been irradiated to change the color from white to blue. Most green quartz (Oro Verde) is also irradiated to achieve the yellow-green color. Diamonds are mainly irradiated to become blue-green or green, although other colors are possible. When light-to-medium-yellow diamonds are treated with gamma rays they may become green; with a high-energy electron beam, blue.
Waxing/oiling
Emeralds containing natural fissures are sometimes filled with wax or oil to disguise them. This wax or oil is also colored to make the emerald appear of better color as well as clarity. Turquoise is also commonly treated in a similar manner.
Fracture filling
Fracture filling has been in use with different gemstones such as diamonds, emeralds, and sapphires. In 2006 "glass-filled rubies" received publicity. Rubies over 10 carats (2 g) with large fractures were filled with lead glass, thus dramatically improving the appearance (of larger rubies in particular). Such treatments are fairly easy to detect.
Bleaching
Another treatment method that is commonly used to treat gemstones is bleaching. This method uses a chemical to reduce the colour of the gem. After bleaching, a combination treatment can be done by dyeing the gemstone once the unwanted colours are removed. Hydrogen peroxide is the product most commonly used to alter gemstones and has notably been used to treat jade and pearls. Bleaching can also be followed by impregnation, which increases the gemstone's durability.
Socioeconomic issues in the gemstone industry
The socio-economic dynamics of the gemstone industry are shaped by market forces and consumer preferences and typically go undiscussed. Changes in demand and prices can significantly affect the livelihoods of those involved in gemstone mining and trade, particularly in developing countries where the industry serves as a crucial source of income.
A situation that arises as a result of this is the exploitation of natural resources and labor within gemstone mining operations. Many mines, particularly in developing countries, face challenges such as inadequate safety measures, low wages, and poor working conditions. Miners, often from disadvantaged backgrounds, endure hazardous working conditions and receive meager wages, contributing to cycles of poverty and exploitation. Gemstone mining operations are frequently conducted in remote or underdeveloped areas, lacking proper infrastructure and access to essential services such as healthcare and education. This further contributes to the pre-existing socio-economic disparities and obstructs community development such that the benefits of gemstone extraction may not adequately reach those directly involved in the process.
Another such issue revolves around environmental degradation resulting from mining activities. Environmental degradation can pose long-term threats to ecosystems and biodiversity, further worsening the socio-economic state in affected regions. Unregulated mining practices often result in deforestation, soil erosion, and water contamination thus threatening ecosystems and biodiversity. Unregulated mining activity can also cause depletion of natural resources, thus diminishing the prospects for sustainable development. The environmental impact of gemstone mining not only poses a threat to ecosystems but also undermines the long-term viability of the industry by diminishing the quality and quantity of available resources.
Furthermore, the gemstone industry is also susceptible to issues related to transparency and ethics, which impact both producers and consumers. The lack of standardized certification processes and the prevalence of illicit practices undermine market integrity and trust. The lack of transparency and accountability in the supply chain aggravates pre-existing inequalities, as middlemen and corporations often capture a disproportionate share of the profits. As a result, the unequal distribution of profits along the supply chain does little to improve socio-economic inequalities, particularly in regions where gemstones are mined.
Addressing these socio-economic challenges requires intensive effort from various stakeholders, including governments, industry executives, and society, to promote sustainable practices and ensure equitable outcomes for all involved parties. Implementing and enforcing regulations to ensure fair labor practices, environmental sustainability, and ethical sourcing is essential. Additionally, investing in community development projects, such as education and healthcare initiatives, can help alleviate poverty and empower marginalized communities dependent on the gemstone industry. Collaboration across sectors is crucial for fostering a more equitable and sustainable gemstone trade that benefits both producers and consumers while respecting human rights and environmental integrity.
Synthetic and artificial gemstones
Synthetic gemstones are distinct from imitation or simulated gems.
Synthetic gems are physically, optically, and chemically identical to the natural stone, but are created in a laboratory. Imitation or simulated stones are chemically different from the natural stone, but may appear quite similar to it; they can be glass, plastic, resins, other compounds, or more easily manufactured synthetic gemstones of a different mineral (such as spinel).
Examples of simulated or imitation stones include cubic zirconia (composed of zirconium oxide), synthetic moissanite, and uncolored synthetic corundum or spinels, all of which are diamond simulants. The simulants imitate the look and color of the real stone but possess neither its chemical nor its physical characteristics. In general, all are less hard than diamond. Moissanite actually has a higher refractive index than diamond, and when presented beside an equivalently sized and cut diamond will show more "fire".
Cultured, synthetic, or "lab-created" gemstones are not imitations: The bulk mineral and trace coloring elements are the same in both. For example, diamonds, rubies, sapphires, and emeralds have been manufactured in labs that possess chemical and physical characteristics identical to the naturally occurring variety. Synthetic (lab created) corundum, including ruby and sapphire, is very common and costs much less than the natural stones. Small synthetic diamonds have been manufactured in large quantities as industrial abrasives, although larger gem-quality synthetic diamonds are becoming available in multiple carats.
Whether a gemstone is a natural stone or synthetic, the chemical, physical, and optical characteristics are the same: They are composed of the same mineral and are colored by the same trace materials, have the same hardness and density and strength, and show the same color spectrum, refractive index, and birefringence (if any). Lab-created stones tend to have a more vivid color since impurities common in natural stones are not present in the synthetic stone. Synthetics are made free of common naturally occurring impurities that reduce gem clarity or color unless intentionally added in order to provide a more drab, natural appearance, or to deceive an assayer. On the other hand, synthetics often show flaws not seen in natural stones, such as minute particles of corroded metal from lab trays used during synthesis.
Types
Some gemstones are more difficult to synthesize than others and not all stones are commercially viable to attempt to synthesize. These are the most common on the market currently.
Synthetic corundum
Synthetic corundum includes ruby (red variation) and sapphire (other color variations), both of which are considered highly desired and valued. Ruby was the first gemstone to be synthesized by Auguste Verneuil with his development of the flame-fusion process in 1902. Synthetic corundum continues to be made typically by flame-fusion as it is most cost-effective, but can also be produced through flux growth and hydrothermal growth.
Synthetic beryls
The most commonly synthesized beryl is emerald (green). Yellow, red and blue beryls are possible but much rarer. Synthetic emerald became possible with the development of the flux growth process and is produced in this way as well as by hydrothermal growth.
Synthetic quartz
Types of synthetic quartz include citrine, rose quartz, and amethyst. Naturally occurring quartz is not rare, but it is nevertheless synthetically produced because it has practical applications beyond aesthetic purposes. Quartz generates an electric current when under pressure and is used in watches, clocks, and oscillators.
Synthetic spinel
Synthetic spinel was first produced by accident. It can be created in any color, making it popular for simulating various natural gemstones. It is created through flux growth and hydrothermal growth.
Creation process
There are two main categories for creation of these minerals: melt or solution processes.
Verneuil flame fusion process (melt process)
The flame fusion process was the first process to successfully create large quantities of synthetic gemstones for sale on the market. It remains the most cost-effective and common method of creating corundums today.
The flame fusion process is completed in a Verneuil furnace. The furnace consists of an inverted blowpipe burner which produces an extremely hot oxyhydrogen flame, a powder dispenser, and a ceramic pedestal. A chemical powder which corresponds to the desired gemstone is passed through this flame. This melts the ingredients, which drop onto a plate and solidify into a crystal called a boule. For corundum the flame must reach about 2000 °C. This process takes hours and yields a crystal with the same properties as its natural counterpart.
To produce corundum, a pure aluminium powder is used with different additives to achieve different colors.
Chromic oxide for ruby
Iron and titanium oxide for blue sapphire
Nickel oxide for yellow sapphire
Nickel, chromium and iron for orange sapphire
Manganese for pink sapphire
Copper for blue-green sapphire
Cobalt for dark blue sapphire
Czochralski process (melt process)
In 1918 this process was developed by J. Czochralski and is also referred to as the "crystal pulling" method. In this process, the required gemstone materials are added to a crucible. A seed stone is placed into the melt in the crucible. As the gem begins to crystallize on the seed, the seed is pulled away and the gem continues to grow. This is used for corundum but is currently the least popular method.
Flux growth (solution process)
The flux growth process was the first process able to synthesize emerald. Flux growth begins with a crucible which can withstand high heat, made of either graphite or platinum, which is filled with a molten liquid referred to as flux. The specific gem ingredients are added and dissolved in this fluid and recrystallize to form the desired gemstone. This is a longer process compared to the flame fusion process and can take from two months up to a year depending on the desired final size.
Hydrothermal growth (solution process)
The hydrothermal growth process attempts to imitate the natural growth process of minerals. The required gem materials are sealed in a container of water and placed under extreme pressure. The water is heated beyond its boiling point which allows normally insoluble materials to dissolve. As more material cannot be added once the container is sealed, in order to create a larger gem the process would begin with a "seed" stone from a previous batch which the new material will crystallize on. This process takes a few weeks to complete.
Characteristics
Synthetic gemstones share chemical and physical properties with natural gemstones, but there are slight differences that can be used to discern synthetic from natural. These differences are often subtle and require microscopy to detect. Undetectable synthetics pose a threat to the market if they can be sold as rare natural gemstones. Because of this, there are certain characteristics gemologists look for. Each crystal is characteristic of the environment and growth process under which it was created.
Gemstones created from the flame-fusion process may have
small air bubbles which were trapped inside the boule during the formation process
visible banding from formation of the boule
chatter marks on the surface, which appear crack-like and are caused by damage during polishing of the gemstone
Gemstones created from flux melt process may have
small cavities which are filled with flux solution
inclusions in the gemstone from crucible used
Gemstones created from hydrothermal growth may have
inclusions from container used
History
Prior to the development of synthesising processes, the alternatives on the market to natural gemstones were imitations or fakes. In 1837, the first successful synthesis of ruby occurred. French chemist Marc Gaudin managed to produce small crystals of ruby by melting together potassium aluminium sulphate and potassium chromate, through what would later be known as the flux melt process. Following this, another French chemist, Fremy, was able to grow large quantities of small ruby crystals using a lead flux.
A few years later an alternative to flux melt was developed, which led to the introduction of what was labeled "reconstructed ruby" to the market. Reconstructed ruby was marketed as the product of a process that produced larger rubies by melting together bits of natural ruby. In later attempts to recreate this process it was found not to be possible, and it is believed reconstructed rubies were most likely created using a multi-step method of melting ruby powder.
Auguste Verneuil, a student of Fremy, went on to develop flame-fusion as an alternative to the flux-melt method. He developed large furnaces which were able to produce large quantities of corundums more efficiently and shifted the gemstone market dramatically. This process is still used today and the furnaces have not changed much from the original design. World production of corundum using this method reaches 1000 million carats a year.
List of rare gemstones
Painite was discovered in 1956 in Ohngaing in Myanmar. The mineral was named in honor of the British gemologist Arthur Charles Davy Pain. At one point it was considered the rarest mineral on Earth.
Tanzanite was discovered in 1967 in Northern Tanzania. With its supply possibly declining in the next 30 years, this gemstone is considered to be more rare than a diamond. This type of gemstone receives its vibrant blue from being heated.
Hibonite was discovered in 1956 in Madagascar. It was named after the discoverer, French geologist Paul Hibon. Gem quality hibonite has been found only in Myanmar.
Red beryl or bixbite was discovered in an area near Beaver, Utah in 1904 and named after the American mineralogist Maynard Bixby.
Jeremejevite was discovered in 1883 in Russia and named after its discoverer, Pawel Wladimirowich Jeremejew (1830–1899).
Chambersite was discovered in 1957 in Chambers County, Texas, US, and named after the deposit's location.
Taaffeite was discovered in 1945. It was named after the discoverer, the Irish gemologist Count Edward Charles Richard Taaffe.
Musgravite was discovered in 1967 in the Musgrave Mountains in South Australia and named for the location.
Black opal is mined chiefly in New South Wales, Australia, and is the rarest type of opal. Against its dark body tone, this gemstone can display a variety of colours.
Grandidierite was discovered by Antoine François Alfred Lacroix (1863–1948) in 1902 in Tuléar Province, Madagascar. It was named in honor of the French naturalist and explorer Alfred Grandidier (1836–1912).
Poudretteite was discovered in 1965 at the Poudrette Quarry in Canada and named after the quarry's owners and operators, the Poudrette family.
Serendibite was discovered in Sri Lanka by Sunil Palitha Gunasekera in 1902 and named after Serendib, the old Arabic name for Sri Lanka.
Zektzerite was discovered by Bart Cannon in 1968 on Kangaroo Ridge near Washington Pass in Okanogan County, Washington, USA. The mineral was named in honor of mathematician and geologist Jack Zektzer, who presented the material for study in 1976.
In popular culture
French singer-songwriter Nolwenn Leroy was inspired by the gemstones for her 2017 album Gemme (meaning gemstone in French) and the single of the same name.
Land of the Lustrous is a Japanese manga and anime series whose main characters are depicted as humanoid jewels.
Steven Universe is an American animated television series whose main characters are magical gemstones who project themselves as feminine humanoids.
| Physical sciences | Mineralogy | null |
12808 | https://en.wikipedia.org/wiki/GSM | GSM | The Global System for Mobile Communications (GSM) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile devices such as mobile phones and tablets. GSM is also a trade mark owned by the GSM Association. "GSM" may also refer to the voice codec initially used in GSM.
It was first implemented in Finland in December 1991. By the mid-2010s, it became a global standard for mobile communications achieving over 90% market share, and operating in over 193 countries and territories.
2G networks developed as a replacement for first generation (1G) analog cellular networks. The GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE).
Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation 5G standards, which do not form part of the ETSI GSM standard.
Beginning in the late 2010s, various carriers worldwide started to shut down their GSM networks. Nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the many mobile phone technologies that evolved from it.
History
Initial European development
In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the Groupe Spécial Mobile (GSM) committee and later provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard. The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers from the four big EU countries cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy). In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI).
The IEEE/RSE awarded to Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell medal for their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication". The GSM (2G) has evolved into 3G, 4G and 5G.
First networks
In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. It was long believed that the former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja. In 2021, however, a former Nokia engineer, Pekka Lonka, revealed that he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo." The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Enhancements
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the first 1800 MHz network became operational in the UK by 1993, called the DCS 1800. Also that year, Telstra became the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSM mobile phone became available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association formed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.
In 2000 the first commercial General Packet Radio Service (GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. Enhanced Data rates for GSM Evolution (EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.
Adoption
The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS), code-division multiple access (CDMA) technology, nor the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP.
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.
Discontinuation
Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network. The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States.
Optus in Australia completed the shutdown of its 2G GSM network on 1 August 2017; the part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier that year, in April 2017.
Singapore shut down 2G services entirely in April 2017.
Technical details
Network structure
The network is structured into several discrete sections:
Base station subsystem – the base stations and their controllers
Network and Switching Subsystem – the part of the network most similar to a fixed network, sometimes just called the "core network"
GPRS Core Network – the optional part which allows packet-based Internet connections
Operations support system (OSS) – network maintenance
Base-station subsystem
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network:
macro
micro
pico
femto, and
umbrella cells
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband-internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies, depending on antenna height, antenna gain, and propagation conditions, from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi), a limit imposed by the timing advance. There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance.
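The practical range limit follows from the timing advance: the field has 64 steps (0–63), and each step compensates one bit period of round-trip delay. A back-of-the-envelope calculation:

```python
C = 299_792_458           # speed of light in m/s
BIT_PERIOD_S = 48 / 13e6  # GSM bit period, about 3.69 microseconds

# One timing-advance step absorbs one bit period of ROUND-TRIP delay,
# so it corresponds to half that period of one-way distance.
step_m = C * BIT_PERIOD_S / 2
max_range_km = 63 * step_m / 1000  # timing advance field spans 0..63

print(f"one step ~ {step_m:.0f} m, max range ~ {max_range_km:.1f} km")
# ~553 m per step and ~35 km maximum, matching the figure above.
```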
GSM supports indoor coverage – achievable by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM carrier frequencies
GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms. TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.
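These figures fit together arithmetically: a burst period carries 156.25 bit periods, and eight of them make one TDMA frame. A quick check of the frame duration and the gross per-timeslot rate:

```python
GROSS_BITRATE = 13e6 / 48   # ~270,833 bit/s per GSM carrier
BITS_PER_TIMESLOT = 156.25  # bit periods in one burst period
TIMESLOTS_PER_FRAME = 8

frame_s = TIMESLOTS_PER_FRAME * BITS_PER_TIMESLOT / GROSS_BITRATE
per_slot_kbps = GROSS_BITRATE / TIMESLOTS_PER_FRAME / 1000

print(f"TDMA frame duration: {frame_s * 1e3:.3f} ms")       # ~4.615 ms
print(f"gross rate per timeslot: {per_slot_kbps:.2f} kbit/s")
```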
The transmission power in the handset is limited to a maximum of 2 watts in GSM 850/900 and 1 watt in GSM 1800/1900.
Voice codecs
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997 with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.
Subscriber Identity Module (SIM)
One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets, simply by changing the SIM.
Phone locking
Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves; it is also possible to hack past the lock imposed by a network operator.
In some countries and regions (e.g. Brazil and Germany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.
GSM security
GSM was intended to be a secure wireless system. It provides user authentication using a pre-shared key and challenge–response, as well as over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.
Research findings indicate that GSM is susceptible to hacking by script kiddies – inexperienced individuals using readily available hardware and software, such as a DVB-T TV tuner – posing a threat to both mobile and network users. Although the term "script kiddies" implies a lack of sophisticated skills, the consequences of such attacks on GSM can be severe, impacting the functionality of cellular networks. Given that GSM continues to be the main cellular technology in numerous countries, its susceptibility to malicious attacks is one that needs to be addressed.
The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack. The system supports multiple algorithms so operators may replace that cipher with a stronger one.
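For illustration, the sketch below implements the core of A5/1 as it was publicly reverse-engineered in 1999: three short linear feedback shift registers (LFSRs) clocked irregularly by a majority rule, with each keystream bit taken as the XOR of the registers' output bits. The toy key and frame number are placeholders; this is an educational sketch of the published design, not a vetted cryptographic implementation:

```python
# Educational sketch of the A5/1 stream cipher (per its 1999 public reverse-engineering).
SIZES = [19, 22, 23]                                  # LFSR lengths
TAPS  = [(13, 16, 17, 18), (20, 21), (7, 20, 21, 22)] # feedback tap positions
CLOCK = [8, 10, 10]                                   # majority-clocking bit of each LFSR

def shift(reg, taps):
    """Clock one LFSR: the XOR of its tap bits enters at index 0."""
    fb = 0
    for t in taps:
        fb ^= reg[t]
    return [fb] + reg[:-1]

def majority_clock(regs):
    """Clock only the registers whose clocking bit agrees with the majority."""
    bits = [regs[i][CLOCK[i]] for i in range(3)]
    maj = int(sum(bits) >= 2)
    for i in range(3):
        if regs[i][CLOCK[i]] == maj:
            regs[i] = shift(regs[i], TAPS[i])

def setup(key64, frame22):
    """Load the 64-bit session key, then the 22-bit frame number, then mix."""
    regs = [[0] * n for n in SIZES]
    for bit in key64 + frame22:
        for i in range(3):
            regs[i] = shift(regs[i], TAPS[i])
            regs[i][0] ^= bit
    for _ in range(100):              # 100 mixing cycles, output discarded
        majority_clock(regs)
    return regs

def keystream(regs, n):
    out = []
    for _ in range(n):
        majority_clock(regs)
        out.append(regs[0][18] ^ regs[1][21] ^ regs[2][22])
    return out

key, frame = [1, 0] * 32, [0] * 22        # placeholder Kc and frame number
bits = keystream(setup(key, frame), 114)  # GSM uses 114 keystream bits per burst direction
```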
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known plaintext attacks. He said that it is possible to build "a full GSM interceptor... from open-source components" but that he had not done so because of legal concerns. Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.
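The time–memory trade-off behind such rainbow-style tables can be shown with a toy example: long chains of "apply the cipher, then reduce the output back into the key space" are precomputed, and only each chain's endpoints are stored. The hash and reduction functions below are arbitrary stand-ins, not the actual functions used against A5/1:

```python
import hashlib

SPACE = 2 ** 20            # toy key space (A5/1's real key space is 2**64)
CHAIN_LEN = 256            # steps per precomputed chain

def f(key: int) -> int:
    """Stand-in one-way function: key -> observable keystream value."""
    digest = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % SPACE

def reduce(value: int, step: int) -> int:
    """Step-dependent reduction mapping an output back into the key space."""
    return (value ^ (step * 0x9E3779B9)) % SPACE

def build_table(starts):
    table = {}
    for start in starts:
        key = start
        for step in range(CHAIN_LEN):
            key = reduce(f(key), step)
        table[key] = start            # store only (endpoint -> start)
    return table

def lookup(target, table):
    """Find a key k with f(k) == target, if some stored chain covers it."""
    for guess in range(CHAIN_LEN - 1, -1, -1):   # guess target's chain position
        key = reduce(target, guess)
        for step in range(guess + 1, CHAIN_LEN):
            key = reduce(f(key), step)
        if key in table:                          # replay that chain from its start
            key = table[key]
            for step in range(CHAIN_LEN):
                out = f(key)
                if out == target:
                    return key
                key = reduce(out, step)
    return None

table = build_table(range(0, SPACE, 4096))   # sparse set of chain start points
recovered = lookup(f(123456), table)         # may recover 123456 or a colliding key
```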
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that, although GEA-1 uses a 64-bit key, the algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have arisen unintentionally. It may have been introduced in order to satisfy European controls on the export of cryptographic programs.
Standards information
The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained.
GSM open-source software
Several open-source software projects exist that provide certain GSM features:
gsmd daemon by Openmoko
OpenBTS develops a Base transceiver station
The GSM Software Project aims to build a GSM analyzer for less than $1,000
OsmocomBB developers intend to replace the proprietary baseband GSM stack with a free software implementation
YateBTS develops a Base transceiver station
Issues with patents and open source
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are continually being added to the standard, and these retain patent protection for a number of years.
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. To date, there have been no lawsuits against users of OpenBTS over GSM use.
| Technology | Networks | null |
12821 | https://en.wikipedia.org/wiki/Gate | Gate | A gate or gateway is a point of entry to or from a space enclosed by walls. The word derives from Old Norse "gat", meaning road or path; other terms include yett and port. The concept originally referred to the gap or hole in the wall or fence, rather than a barrier which closed it. Gates may prevent or control the entry or exit of individuals, or they may be merely decorative. The moving part or parts of a gateway may be considered "doors", as they are fixed at one side whilst opening and closing like a door.
A gate may have a latch that can be raised and lowered either to open the gate or to prevent it from swinging. Gate operation can be either automated or manual. Locks are also used on gates to increase security.
Larger gates can be used for a whole building, such as a castle or fortified town. Doors can also be considered gates when they are used to block entry, as is prevalent within a gatehouse.
Purpose-specific types of gate
Baby gate: a safety gate to protect babies and toddlers
Badger gate: gate to allow badgers to pass through rabbit-proof fencing
City gate of a walled city
Hampshire gate (a.k.a. New Zealand gate, wire gate, etc.)
Kissing gate on a footpath
Lychgate with a roof
Mon: a Japanese gate. The religious torii is comparable to the Chinese pailou (paifang), Indian torana, Indonesian paduraksa and Korean hongsalmun. Mon are widespread in Japanese gardens.
Portcullis of a castle
Race gate used for checkpoints on race tracks.
Slip gate on footpaths
Turnstile
Watergate of a castle by navigable water
Slalom skiing gates
Wicket gate
Image gallery
| Technology | Architectural elements | null |
12822 | https://en.wikipedia.org/wiki/Greek%20fire | Greek fire | Greek fire was an incendiary weapon system used by the Byzantine Empire from the seventh to the fourteenth centuries. The recipe for Greek fire was a closely-guarded state secret; historians have variously speculated that it was based on saltpeter, sulfur, or quicklime, but most modern scholars agree that it was based on petroleum mixed with resins, comparable in composition to modern napalm. Byzantine sailors would toss grenades loaded with Greek fire onto enemy ships or spray it from tubes. Its ability to burn on water made it an effective and destructive naval incendiary weapon, and rival powers tried unsuccessfully to copy the material.
Name
Usage of the term "Greek fire" has been general in English and most other languages since the Crusades. Original Byzantine sources called the substance by a variety of names, such as "sea fire", "Roman fire", "war fire", "liquid fire", "sticky fire", or "manufactured fire".
History
Incendiary and flaming weapons were used in warfare for centuries before Greek fire was invented. They included sulfur-, petroleum-, and bitumen-based mixtures. Incendiary arrows and pots or small pouches containing combustible substances surrounded by caltrops or spikes, or launched by catapults, were used by the 9th century BC by the Assyrians and were extensively used in the Greco-Roman world as well. Thucydides mentions that in the siege of Delium in 424 BC a long tube on wheels was used which blew flames forward using a large bellows. The Graeco-Roman treatise Kestoi, compiled in the late 2nd or early 3rd century AD and traditionally ascribed to Julius Africanus, records a mixture that ignited from adequate heat and intense sunlight, used in grenades or night attacks.
In naval warfare, the Byzantine emperor Anastasius I is recorded by the chronicler John Malalas to have been advised by a philosopher from Athens called Proclus to use sulfur to burn the ships of the rebel general Vitalian.
Greek fire proper was developed in c. 672 and is ascribed by the chronicler Theophanes the Confessor to Kallinikos (Latinized Callinicus), a Jewish architect from Heliopolis, in Syria, by then overrun by the Muslim conquests.
The accuracy and exact chronology of this account are open to question: elsewhere, Theophanes reports the use of fire-carrying ships equipped with nozzles by the Byzantines a couple of years before the supposed arrival of Kallinikos at Constantinople. If this is not due to chronological confusion of the events of the siege, it may suggest that Kallinikos introduced an improved version of an established weapon. The historian James Partington thinks it likely that Greek fire was not the creation of any single person but "invented by chemists in Constantinople who had inherited the discoveries of the Alexandrian chemical school". The 11th-century chronicler George Kedrenos records that Kallinikos came from Heliopolis in Egypt, but most scholars reject this as an error. Kedrenos also records the story, considered implausible by modern scholars, that Kallinikos' descendants, a family whose name meant "brilliant", kept the secret of the fire's manufacture and continued doing so to Kedrenos' time.
Kallinikos' development of Greek fire came at a critical moment in the Byzantine Empire's history: weakened by its long wars with Sassanid Persia, the Byzantines had been unable to effectively resist the onslaught of the Muslim conquests. Within a generation, Syria, Palestine, and Egypt had fallen to the Arabs, who then set out to conquer the imperial capital of Constantinople. Greek fire was used to great effect against the Muslim fleets, helping to repel the Muslims at the first and second Arab sieges of the city. Records of its use in later naval battles against the Saracens are more sporadic, but it secured victories during the Byzantine expansion in the late 9th and early 10th centuries. Use of the substance was prominent in Byzantine civil wars, chiefly the revolt of the thematic fleets in 727 and the large-scale rebellion led by Thomas the Slav in 821–823. In both cases, the rebel fleets were defeated by the Constantinople-based central Imperial fleet through the use of Greek fire. The Byzantines also used the weapon to devastating effect against the various Rus' raids on the Bosporus, especially those of 941 and 1043, as well as during the Bulgarian war of 970–971, when the fire-carrying Byzantine ships blockaded the Danube.
The importance placed on Greek fire during the Empire's struggle against the Arabs led to its discovery being ascribed to divine intervention. The Emperor Constantine Porphyrogennetos, in his book De Administrando Imperio, admonishes his son and heir, Romanos II, never to reveal the secrets of its composition, as it was "shown and revealed by an angel to the great and holy first Christian emperor Constantine" and that the angel bound him "not to prepare this fire but for Christians, and only in the imperial city". As a warning, he adds that one official, who was bribed into handing some of it over to the Empire's enemies, was struck down by a "flame from heaven" as he was about to enter a church. As the latter incident demonstrates, the Byzantines could not avoid capture of their secret weapon: the Arabs captured at least one fireship intact in 827, and the Bulgars captured several siphōns and much of the substance itself in 812/814. This was apparently not enough to allow their enemies to copy it (see below). The Arabs used various incendiary substances similar to the Byzantine weapon, but were never able to copy the Byzantine method of deployment by siphōn, and used catapults and grenades instead.
Greek fire continued to be mentioned during the 12th century, and Anna Komnene gives a vivid description of its use in a naval battle against the Pisans in 1099. The use of hastily improvised fireships is mentioned during the 1203 siege of Constantinople by the Fourth Crusade, but no report confirms the use of Greek fire. This might be because of the general disarmament of the Empire in the 20 years leading up to the sacking, or because the Byzantines had lost access to the areas where the primary ingredients were to be found, or even perhaps because the secret had been lost over time.
Records of a 13th-century use of "Greek fire" by the Saracens against the Crusaders can be read in the Memoirs of the Lord of Joinville during the Seventh Crusade. One description in the memoir reads: "the tail of fire that trailed behind it was as big as a great spear; and it made such a noise as it came, that it sounded like the thunder of heaven. It looked like a dragon flying through the air. Such a bright light did it cast, that one could see all over the camp as though it were day, by reason of the great mass of fire, and the brilliance of the light that it shed."
In the 19th century, it is reported that an Armenian called Kavafian approached the government of the Ottoman Empire with a new type of Greek fire he claimed to have developed. Kavafian refused to reveal its composition when asked by the government, insisting that he be placed in command of its use during naval engagements. Not long after this, he was poisoned by imperial authorities, without their ever having found out his secret.
Manufacture
General characteristics
As Constantine Porphyrogennetos' warnings show, the ingredients and the processes of manufacture and deployment of Greek fire were carefully guarded military secrets. So strict was the secrecy that the composition of Greek fire was lost forever and remains a source of speculation. The mystery of the formula has long dominated the research into Greek fire. Despite this almost exclusive focus, Greek fire is best understood as a complete weapon system of many components, all of which were needed to operate together to render it effective. This comprised not only the formula of its composition, but also the specialized dromon ships that carried it into battle, the device used to prepare the substance by heating and pressurizing it, the siphōn used for projecting it, and the special training of its operators. Knowledge of the whole system was highly compartmentalised, with operators and technicians aware of the secrets of only one component, ensuring that no enemy could gain knowledge of it in its entirety. This accounts for the fact that when the Bulgarians took Mesembria and Debeltos in 814, they captured 36 siphōns and even quantities of the substance itself, but were unable to make any use of them.
The information available on Greek fire is indirect, based on references in the Byzantine military manuals and secondary historical sources such as Anna Komnene and Western European chroniclers, which are often inaccurate. In her Alexiad, Anna Komnene provides a description of an incendiary weapon, which was used by the Byzantine garrison of Dyrrhachium in 1108 against the Normans. It is often regarded as an at least partial "recipe" for Greek fire.
At the same time, the reports by Western chroniclers of the famed substance are largely unreliable, since they apply the name to all incendiary substances.
In attempting to reconstruct the Greek fire system, the evidence from the contemporary literary references provides the following characteristics:
It burned on water; according to some interpretations it was ignited by water. Numerous writers testify that it could be extinguished only by a few substances, such as sand, strong vinegar, or old urine, some presumably by a sort of chemical reaction.
It was a liquid substance – not some sort of projectile – as verified both by descriptions and the name "liquid fire".
At sea it was usually ejected from a siphōn, but earthenware pots or grenades filled with it – or similar substances – were also used.
The discharge of Greek fire was accompanied by "thunder" and "much smoke".
Theories on composition
The first and, for a long time, most popular theory regarding the composition of Greek fire held that its chief ingredient was saltpeter, making it an early form of gunpowder. This argument was based on the "thunder and smoke" description, as well as on the distance the flame could be projected from the siphōn, which suggested an explosive discharge. From the times of Isaac Vossius, several scholars adhered to this position, most notably the so-called "French school" during the 19th century, which included the chemist Marcellin Berthelot.
This view has subsequently been rejected, since saltpeter does not appear to have been used in warfare in Europe or the Middle East before the 13th century, and is absent from the accounts of the Muslim writers – the foremost chemists of the early medieval world – before the same period. In addition, the behavior of the suggested mixture would have been very different from the siphōn-projected substance described by Byzantine sources.
A second view, based on the fact that Greek fire was inextinguishable by water (some sources suggest that water intensified the flames), suggested that its destructive power was the result of the explosive reaction between water and quicklime. Although quicklime was known and used by the Byzantines and the Arabs in warfare, the theory is refuted by literary and empirical evidence. A quicklime-based substance would have to come in contact with water to ignite, while Emperor Leo's Tactica indicates that Greek fire was often poured directly onto the decks of enemy ships, although admittedly, decks were kept wet due to lack of sealants. Likewise, Leo describes the use of grenades, which further reinforces the view that contact with water was not necessary for the substance's ignition. Zenghelis (1932) pointed out that, based on experiments, the result of the water–quicklime reaction would be negligible in the open sea.
Another similar proposition suggested that Kallinikos had discovered calcium phosphide, which can be made by boiling bones in urine in a sealed vessel. On contact with water it releases phosphine, which ignites spontaneously. Extensive experiments with calcium phosphide also failed to reproduce the described intensity of Greek fire.
Consequently, although the presence of either quicklime or saltpeter in the mixture cannot be entirely excluded, they were not the primary ingredient. Most modern scholars agree that Greek fire was based on either crude or refined petroleum, comparable to modern napalm. The Byzantines had easy access to crude oil from the naturally occurring wells around the Black Sea (e.g., the wells around Tmutorakan noted by Constantine Porphyrogennetos) or in various locations throughout the Middle East. An alternate name for Greek fire was "Median fire", and the 6th-century historian Procopius records that crude oil, called "naphtha" by the Persians, was known to the Greeks as "Median oil". This seems to corroborate the availability of naphtha as a basic ingredient of Greek fire.
Naphtha was also used by the Abbasids in the 9th century, with special naphtha troops who wore thick protective suits and used small copper vessels containing burning oil, which they threw onto the enemy troops. There is also a surviving 9th-century Latin text, preserved at Wolfenbüttel in Germany, which mentions the ingredients of what appears to be Greek fire and the operation of the siphōns used to project it. Although the text contains some inaccuracies, it identifies the main component as naphtha. Resins were probably added as a thickener (some sources refer to the substance as "sticky fire"), and to increase the duration and intensity of the flame. A modern theoretical concoction included the use of pine tar and animal fat.
A 12th-century treatise prepared by Mardi bin Ali al-Tarsusi for Saladin records an Arab version of Greek fire, which also had a petroleum base, with sulfur and various resins added. Any direct relation with the Byzantine formula is unlikely. An Italian recipe from the 16th century has been recorded for recreational use; it includes charcoal from a willow tree, saltpeter, alcohol, sulfur, incense, tar, wool, and camphor; the concoction was guaranteed to "burn under water" and to be "beautiful".
Methods of deployment
The chief method of deployment of Greek fire, which sets it apart from similar substances, was its projection through a tube (siphōn), for use aboard ships or in sieges. Portable projectors (cheirosiphōnes, χειροσίφωνες) were also invented, reputedly by Emperor Leo VI. The Byzantine military manuals also mention that jars (chytrai or tzykalia) filled with Greek fire and caltrops wrapped with tow and soaked in the substance were thrown by catapults, while pivoting cranes (gerania) were employed to pour it upon enemy ships. The cheirosiphōnes especially were prescribed for use at land and in sieges, both against siege machines and against defenders on the walls, by several 10th-century military authors, and their use is depicted in the Poliorcetica of Hero of Byzantium. The Byzantine dromons usually had a siphōn installed on their prow under the forecastle, but additional devices could also be placed elsewhere on the ship. Thus in 941, when the Byzantines were facing the vastly more numerous Rus' fleet, siphōns were placed also amidships and even astern.
Projectors
The use of tubular projectors (σίφων, siphōn) is amply attested in the contemporary sources. Anna Komnene gives this account of beast-shaped Greek fire projectors being mounted to the bow of warships:
As he [the Emperor Alexios I] knew that the Pisans were skilled in sea warfare and dreaded a battle with them, on the prow of each ship he had a head fixed of a lion or other land-animal, made in brass or iron with the mouth open and then gilded over, so that their mere aspect was terrifying. And the fire which was to be directed against the enemy through tubes he made to pass through the mouths of the beasts, so that it seemed as if the lions and the other similar monsters were vomiting the fire.
Some sources provide more information on the composition and function of the whole mechanism. The Wolfenbüttel manuscript provides the following description:
...having built a furnace right at the front of the ship, they set on it a copper vessel full of these things, having put fire underneath. And one of them, having made a bronze tube similar to that which the rustics call a squitiatoria, "squirt," with which boys play, they spray [it] at the enemy.
Another, possibly first-hand, account of the use of Greek fire comes from the 11th-century Yngvars saga víðförla, in which the Viking Ingvar the Far-Travelled faces ships equipped with Greek fire weapons:
[They] began blowing with smiths’ bellows at a furnace in which there was fire and there came from it a great din. There stood there also a brass [or bronze] tube and from it flew much fire against one ship, and it burned up in a short time so that all of it became white ashes...
The account, albeit embellished, corresponds with many of the characteristics of Greek fire known from other sources, such as a loud roar that accompanied its discharge. These two texts are also the only two sources that explicitly mention that the substance was heated over a furnace before being discharged; although the validity of this information is open to question, modern reconstructions have relied upon them.
Based on these descriptions and the Byzantine sources, John Haldon and Maurice Byrne designed a hypothetical apparatus as consisting of three main components: a bronze pump, which was used to pressurize the oil; a brazier, used to heat the oil (πρόπυρον, propyron, "pre-heater"); and the nozzle, which was covered in bronze and mounted on a swivel (στρεπτόν, strepton). The brazier, burning a match of linen or flax that produced intense heat and the characteristic thick smoke, was used to heat oil and the other ingredients in an airtight tank above it, a process that also helped to dissolve the resins into a fluid mixture. The substance was pressurized by the heat and the use of a force pump. After it had reached the proper pressure, a valve connecting the tank with the swivel was opened and the mixture was discharged from its end, being ignited at its mouth by a flame. The intense heat of the flame made necessary the presence of heat shields made of iron (βουκόλια, boukolia), which are attested in the fleet inventories.
The process of operating Haldon and Byrne's design was fraught with danger, as the mounting pressure could easily make the heated oil tank explode, a flaw which was not recorded as a problem with the historical fire weapon. In the experiments conducted by Haldon in 2002 for the episode "Fireship" of the television series Machines Time Forgot, even modern welding techniques failed to secure adequate insulation of the bronze tank under pressure. This led to the relocation of the pressure pump between the tank and the nozzle. The full-scale device built on this basis established the effectiveness of the mechanism's design, even with the simple materials and techniques available to the Byzantines. The experiment used crude oil mixed with wood resins, and achieved a flame temperature of over 1,000 °C and an effective range of up to 15 m.
Hand-held projectors
The portable cheirosiphōn ("hand-siphōn"), the earliest analogue to a modern flamethrower, is extensively attested in the military documents of the 10th century, and recommended for use in both sea and land. They first appear in the Tactica of emperor Leo VI the Wise, who claims to have invented them. Subsequent authors continued to refer to the cheirosiphōnes, especially for use against siege towers; Nikephoros II Phokas also advises their use in field armies, with the aim of disrupting the enemy formation. Although both Leo VI and Nikephoros Phokas claim that the substance used in the cheirosiphōnes was the same as in the static devices used on ships, Haldon and Byrne consider that the former were manifestly different from their larger cousins, and theorize that the device was fundamentally different, "a simple syringe [that] squirted both liquid fire (presumably unignited) and noxious juices to repel enemy troops." The illustrations of Hero's Poliorcetica show the cheirosiphōn also throwing the ignited substance.
Grenades
In its earliest form, Greek fire was hurled onto enemy forces by firing a burning cloth-wrapped ball, perhaps containing a flask, using a form of light catapult, most probably a seaborne variant of the Roman light catapult or onager. These were capable of hurling loads of a few kilograms over a distance of several hundred metres.
Effectiveness and countermeasures
Although the destructiveness of Greek fire is indisputable, it did not make the Byzantine navy invincible. It was not, in the words of naval historian John Pryor, a "ship-killer" comparable to the naval ram, which, by then, had fallen out of use. While Greek fire remained a potent weapon, its limitations were significant when compared to more traditional forms of artillery: in its siphōn-deployed version, it had a limited range, and it could be used safely only in a calm sea and with favorable wind conditions.
The Muslim navies eventually adapted themselves to it by staying out of its effective range and devising methods of protection such as felt or hides soaked in vinegar.
Nevertheless, it was still a decisive weapon in many battles. John Julius Norwich wrote: "It is impossible to exaggerate the importance of Greek fire in Byzantine history."
In literature
In William Golding's 1958 play The Brass Butterfly, adapted from his novella Envoy Extraordinary, the Greek inventor Phanocles demonstrates explosives to the Roman Emperor. The Emperor decides that his empire is not ready for this or for Phanocles's other inventions and sends him on "a slow boat to China".
In Victor Canning's stage play Honour Bright (1960), the crusader Godfrey of Ware returns with a casket of Greek Fire given to him by an old man in Athens.
In Rick Riordan's Greek storyline, Greek Fire is described as being a volatile green liquid. When it explodes, all of the substance is spread out over an area and burns continuously. It is very strong and dangerous.
In C. J. Sansom's historical mystery novel Dark Fire, Thomas Cromwell sends the lawyer Matthew Shardlake to recover the secret of Greek fire, following its discovery in the library of a dissolved London monastery.
In Michael Crichton's sci-fi novel Timeline, Professor Edward Johnston is stuck in the past in 14th-century Europe, and claims to have knowledge of Greek fire.
In Mika Waltari's novel The Dark Angel, some old men who are the last ones who know the secret of Greek fire are mentioned as present in the last Christian services held in Hagia Sophia before the Fall of Constantinople. The narrator is told that in the event of the city's fall, they will be killed so as to keep the secret from the Turks.
In George R. R. Martin's fantasy series of novels A Song of Ice and Fire, and its television adaptation Game of Thrones, wildfire is similar to Greek fire. It was used in naval battles as it could remain lit on water, and its recipe was closely guarded.
In popular culture
Greek fire was used by Blackbeard's ship, the Queen Anne's Revenge, in the 2011 film Pirates of the Caribbean: On Stranger Tides.
An application of Greek fire is shown in the 2011 Ubisoft video game Assassin's Creed: Revelations when the main character, Ezio Auditore, escapes from the port of Istanbul using a hand projector located on an Ottoman ship.
| Technology | Incendiary weapons | null |
12832 | https://en.wikipedia.org/wiki/G%20protein-coupled%20receptor | G protein-coupled receptor | G protein-coupled receptors (GPCRs), also known as seven-(pass)-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), form a large group of evolutionarily related proteins that are cell surface receptors that detect molecules outside the cell and activate cellular responses. They are coupled with G proteins. They pass through the cell membrane seven times, forming six loops of amino acid residues (three extracellular loops interacting with ligand molecules and three intracellular loops interacting with G proteins) together with an N-terminal extracellular region and a C-terminal intracellular region, which is why they are sometimes referred to as seven-transmembrane receptors. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (rhodopsin-like family). They are all activated by agonists, although a spontaneous auto-activation of an empty receptor has also been observed.
G protein-coupled receptors are found only in eukaryotes, including yeast, and choanoflagellates. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases.
There are two principal signal transduction pathways involving the G protein-coupled receptors:
the cAMP signal pathway and
the phosphatidylinositol signal pathway.
When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
GPCRs are an important drug target and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, i.e. mental disorders, metabolic (including endocrinological) disorders, immunological disorders (including viral infections), cardiovascular and inflammatory diseases, senses disorders, and cancer. The long-known association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research.
History and significance
With the determination of the first structure of the complex between a G-protein coupled receptor (GPCR) and a G-protein trimer (Gαβγ) in 2011, a new chapter of GPCR research was opened: structural investigations of global switches involving more than one protein. The previous breakthroughs involved determination of the crystal structure of the first GPCR, rhodopsin, in 2000 and the crystal structure of the first GPCR with a diffusible ligand (β2AR) in 2007. The way in which the seven transmembrane helices of a GPCR are arranged into a bundle was suspected based on the low-resolution model of frog rhodopsin from cryogenic electron microscopy studies of the two-dimensional crystals. The crystal structure of rhodopsin, which came three years later, was not a surprise apart from the presence of an additional cytoplasmic helix H8 and the precise location of a loop covering the retinal binding site. However, it provided a scaffold which was hoped to be a universal template for homology modeling and drug design for other GPCRs – a notion that proved to be too optimistic.
Results 7 years later were surprising because the crystallization of the β2-adrenergic receptor (β2AR) with a diffusible ligand revealed a quite different shape of the receptor's extracellular side than that of rhodopsin. This area is important because it is responsible for ligand binding and is targeted by many drugs. Moreover, the ligand binding site was much more spacious than in the rhodopsin structure and was open to the exterior. In the other receptors crystallized shortly afterwards, the binding site was even more easily accessible to the ligand. New structures complemented with biochemical investigations uncovered mechanisms of action of molecular switches which modulate the structure of the receptor, leading to activation states for agonists or to complete or partial inactivation states for inverse agonists.
The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for their work that was "crucial for understanding how G protein-coupled receptors function". There have been at least seven other Nobel Prizes awarded for some aspect of G protein–mediated signaling. As of 2012, two of the top ten global best-selling drugs (Advair Diskus and Abilify) act by targeting G protein-coupled receptors.
Classification
The exact size of the GPCR superfamily is unknown, but at least 831 different human genes (or about 4% of the entire protein-coding genome) have been predicted to code for them from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily was classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes.
The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of class A GPCRs, over half of these are predicted to encode olfactory receptors, while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs have a common structure and mechanism of signal transduction. The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19).
According to the classical A-F system, GPCRs can be grouped into six classes based on sequence homology and functional similarity:
Class A (or 1) (Rhodopsin-like)
Class B (or 2) (Secretin receptor family)
Class C (or 3) (Metabotropic glutamate/pheromone)
Class D (or 4) (Fungal mating pheromone receptors)
Class E (or 5) (Cyclic AMP receptors)
Class F (or 6) (Frizzled/Smoothened)
More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2, Secretin) has been proposed for vertebrate GPCRs. They correspond to classical classes C, A, B2, F, and B.
An early study based on available DNA sequence suggested that the human genome encodes roughly 750 G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions.
Some web-servers and bioinformatics prediction methods have been used for predicting the classification of GPCRs according to their amino acid sequence alone, by means of the pseudo amino acid composition approach.
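To illustrate the idea (this is a generic sketch, not a reimplementation of any particular published predictor), a pseudo amino acid composition vector augments the 20 residue frequencies with λ sequence-order correlation factors computed from a physicochemical property; the property scale used here is a placeholder, not a published scale:

```python
AA = "ACDEFGHIKLMNPQRSTVWY"
PROP = {a: i / 19.0 - 0.5 for i, a in enumerate(AA)}  # placeholder property scale

def pseaac(seq: str, lam: int = 3, w: float = 0.05) -> list[float]:
    """Pseudo amino acid composition: 20 frequencies + lam correlation factors."""
    freq = [seq.count(a) / len(seq) for a in AA]
    theta = []
    for k in range(1, lam + 1):   # k-th tier sequence-order correlation factor
        diffs = [(PROP[seq[i]] - PROP[seq[i + k]]) ** 2
                 for i in range(len(seq) - k)]
        theta.append(sum(diffs) / len(diffs))
    denom = 1.0 + w * sum(theta)
    return [x / denom for x in freq] + [w * t / denom for t in theta]

features = pseaac("MGLNDSTWQERKAAYVVFHICP")  # a 20 + 3 = 23-dimensional vector
```

A classifier trained on such vectors can then assign a sequence to one of the GPCR classes; the weight w and tier count λ are tunable hyperparameters.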
Physiological roles
GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include:
The visual sense: The opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose.
The gustatory sense (taste): GPCRs in taste cells mediate release of gustducin in response to bitter-, umami- and sweet-tasting substances.
The sense of smell: Receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors)
Behavioral and mood regulation: Receptors in the mammalian brain bind several different neurotransmitters, including serotonin, dopamine, histamine, GABA, and glutamate
Regulation of immune system activity and inflammation: chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response. GPCRs are also involved in immune-modulation, e. g. regulating interleukin induction or suppressing TLR-induced immune responses from T cells.
Autonomic nervous system transmission: Both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways, responsible for control of many automatic functions of the body such as blood pressure, heart rate, and digestive processes
Cell density sensing: A novel GPCR role in regulating cell density sensing.
Homeostasis modulation (e.g., water balance).
Involved in growth and metastasis of some types of tumors.
Used in the endocrine system for peptide and amino-acid derivative hormones that bind to GPCRs on the cell membrane of a target cell. This activates cAMP, which in turn activates several kinases, allowing for a cellular response, such as transcription.
Receptor structure
GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins (channelrhodopsin) that resemble GPCRs may contain ion channels, within their protein.
In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. In 2007, the first structure of a human GPCR was solved. This human β2-adrenergic receptor GPCR structure proved highly similar to the bovine rhodopsin. The structures of activated or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helices (TM5 and TM6). The structure of the activated beta-2 adrenergic receptor in complex with Gs confirmed that the Gα binds to a cavity created by this movement.
GPCRs exhibit a similar structure to some other proteins with seven transmembrane domains, such as microbial rhodopsins and adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2). However, these 7TMH (7-transmembrane helices) receptors and channels do not associate with G proteins. In addition, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e. GPCRs usually have an extracellular N-terminus, cytoplasmic C-terminus, whereas ADIPORs are inverted).
Structure–function relationships
In terms of structure, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain that is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with the residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion) leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation.
The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand-binding. For example, the C-terminus of M3 muscarinic receptors is sufficient, and the six-amino-acid polybasic (KKKRRK) domain in the C-terminus is necessary, for its preassembly with Gq proteins. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins, leading to the creation of signaling complexes involved in extracellular-signal regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein-decoupling and internalization of GPCRs are important mechanisms of desensitization. In addition, internalized "mega-complexes" consisting of a single GPCR, β-arr (in the tail conformation), and heterotrimeric G protein exist and may account for protein signaling from endosomes.
A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops. Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling.
GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices.
Mechanism
The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. Further effect depends on the type of G protein. G proteins are subsequently inactivated by GTPase activating proteins, known as RGS proteins.
Ligand binding
GPCRs include one or more receptors for the following ligands:
sensory signal mediators (e.g., light and olfactory stimulatory molecules);
adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin;
biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, serotonin, and melatonin);
glutamate (metabotropic effect);
glucagon;
acetylcholine (muscarinic effect);
chemokines;
lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes);
peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone [FSH], gonadotropin-releasing hormone [GnRH], neurokinin, thyrotropin-releasing hormone [TRH], and oxytocin);
and endocannabinoids.
GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors.
In contrast to other types of receptors that have been studied, wherein ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. However, protease-activated receptors are activated by cleavage of part of their extracellular domain.
Conformational change
The transduction of the signal through the membrane by the receptor is not completely understood. It is known that in the inactive state, the GPCR is bound to a heterotrimeric G protein complex. Binding of an agonist to the GPCR results in a conformational change in the receptor that is transmitted to the bound Gα subunit of the heterotrimeric G protein via protein domain dynamics. The activated Gα subunit exchanges GTP in place of GDP which in turn triggers the dissociation of Gα subunit from the Gβγ dimer and from the receptor. The dissociated Gα and Gβγ subunits interact with other intracellular proteins to continue the signal transduction cascade while the freed GPCR is able to rebind to another heterotrimeric G protein to form a new complex that is ready to initiate another round of signal transduction.
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known exactly how the active and inactive states differ from each other.
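This picture can be made quantitative with the classical two-state model. In the sketch below, all parameter values are illustrative assumptions: L is the basal inactive↔active equilibrium constant, K the ligand's association constant for the inactive state, and α the ratio of active- to inactive-state affinity, so that α > 1 models an agonist, α < 1 an inverse agonist, and α = 1 a neutral antagonist (which leaves the active fraction unchanged):

```python
def fraction_active(A: float, L: float = 0.01, K: float = 1e7,
                    alpha: float = 100.0) -> float:
    """Two-state receptor model: fraction of receptors in the active state.

    A     -- ligand concentration (M)
    L     -- basal [R*]/[R] equilibrium constant (constitutive activity)
    K     -- association constant of the ligand for the inactive state (1/M)
    alpha -- ratio of active- to inactive-state affinity
    """
    return L * (1 + alpha * K * A) / (1 + L + K * A * (1 + alpha * L))

basal = fraction_active(0.0)                  # ~0.0099 active with no ligand
agonist = fraction_active(1e-6, alpha=100.0)  # ~0.48: equilibrium pushed toward R*
inverse = fraction_active(1e-6, alpha=0.01)   # ~0.001: equilibrium pushed toward R
```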
G-protein activation/deactivation cycle
When the receptor is inactive, the GEF domain may be bound to an also inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are a trimer of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that is rendered inactive when reversibly bound to Guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a 10:1 ratio of cytosolic GTP:GDP so exchange for GTP is ensured. At this point, the subunits of the G-protein dissociate from the receptor, as well as each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited due to the palmitoylation of Gα and the presence of an isoprenoid moiety that has been covalently added to the C-termini of Gγ.
Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called regulators of G-protein signaling, or RGS proteins, which are a type of GTPase-activating protein, or GAP. In fact, many of the primary effector proteins (e.g., adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination.
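Taken together, the cycle behaves like a two-state kinetic loop: receptor/GEF activity drives GDP→GTP exchange, while GTP hydrolysis, accelerated by RGS/GAP activity, drives the return to the resting state. The rate constants in the sketch below are illustrative assumptions, not measured values; it only shows how GAP activity lowers the steady-state level of active Gα:

```python
def active_fraction(receptor_active: float, rgs: bool = True,
                    t_end: float = 60.0, dt: float = 0.01) -> float:
    """Euler integration of a minimal G-protein activation/deactivation cycle."""
    k_exchange = 2.0      # receptor(GEF)-driven GDP->GTP exchange rate (1/s)
    k_gtpase = 0.05       # intrinsic GTP hydrolysis rate of Galpha (1/s)
    gap_boost = 20.0      # fold-acceleration of hydrolysis by RGS (GAP) proteins
    k_off = k_gtpase * (gap_boost if rgs else 1.0)

    g_gdp, g_gtp = 1.0, 0.0          # fractions of the G-protein pool
    for _ in range(int(t_end / dt)):
        activation = k_exchange * receptor_active * g_gdp
        deactivation = k_off * g_gtp
        g_gdp += (deactivation - activation) * dt
        g_gtp += (activation - deactivation) * dt
    return g_gtp

# RGS proteins sharpen signal termination: compare steady-state active fractions.
print(active_fraction(1.0, rgs=True))   # ~0.67 with GAP-accelerated shutoff
print(active_fraction(1.0, rgs=False))  # ~0.98 when hydrolysis is slow
```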
Crosstalk
GPCRs downstream signals have been shown to possibly interact with integrin signals, such as FAK. Integrin signaling will phosphorylate FAK, which can then decrease GPCR Gαs activity.
Signaling
If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP.
Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state.
Adenylate cyclases (of which 9 membrane-bound and one cytosolic forms are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins.
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arrs have high affinity only for the phosphorylated form of most GPCRs (see above and below), the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility for interaction does allow for G-protein-independent signaling to occur.
G-protein-dependent signaling
There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal transducing properties of the various possible βγ combinations do not appear to radically differ from one another, these classes are defined according to the isoform of their α-subunit.
While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. In addition, a conformation that preferably activates one isoform of Gα may activate another if the preferred is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological or experimental conditions.
Gα signaling
The effector of both the Gαs and Gαi/o pathways is the cyclic-adenosine monophosphate (cAMP)-generating enzyme adenylate cyclase, or AC. While there are ten different AC gene products in mammals, each with subtle differences in tissue distribution or function, all catalyze the conversion of cytosolic adenosine triphosphate (ATP) to cAMP, and all are directly stimulated by G-proteins of the Gαs class. In contrast, however, interaction with Gα subunits of the Gαi/o type inhibits AC from generating cAMP. Thus, a GPCR coupled to Gαs counteracts the actions of a GPCR coupled to Gαi/o, and vice versa. The level of cytosolic cAMP may then determine the activity of various ion channels as well as members of the ser/thr-specific protein kinase A (PKA) family. Thus cAMP is considered a second messenger and PKA a secondary effector.
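The push-pull of Gαs and Gαi on AC can be caricatured as a steady-state balance: synthesis scales up with active Gαs, scales down with active Gαi, and cAMP is removed by first-order phosphodiesterase activity. Every constant below is an illustrative assumption:

```python
def camp_steady_state(gs_active: float, gi_active: float,
                      basal_ac: float = 1.0, coupling: float = 10.0,
                      k_pde: float = 1.0) -> float:
    """Toy steady-state cAMP level from the Gs/Gi push-pull on adenylate cyclase.

    Synthesis rises with active Gs and falls with active Gi; phosphodiesterase
    (PDE) degrades cAMP in a first-order fashion, so at steady state
    synthesis == k_pde * cAMP.
    """
    synthesis = basal_ac * (1 + coupling * gs_active) / (1 + coupling * gi_active)
    return synthesis / k_pde

print(camp_steady_state(0.5, 0.0))   # Gs tone alone raises cAMP ~6-fold over basal
print(camp_steady_state(0.5, 0.5))   # concurrent Gi tone pulls it back to basal
```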
The effector of the Gαq/11 pathway is phospholipase C-β (PLCβ), which catalyzes the cleavage of membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2) into the second messengers inositol (1,4,5) trisphosphate (IP3) and diacylglycerol (DAG). IP3 acts on IP3 receptors found in the membrane of the endoplasmic reticulum (ER) to elicit Ca2+ release from the ER, while DAG diffuses along the plasma membrane where it may activate any membrane localized forms of a second ser/thr kinase called protein kinase C (PKC). Since many isoforms of PKC are also activated by increases in intracellular Ca2+, both these pathways can also converge on each other to signal through the same secondary effector. Elevated intracellular Ca2+ also binds and allosterically activates proteins called calmodulins, which in turn go on to bind and regulate further downstream targets. The effectors of the Gα12/13 pathway are guanine-nucleotide exchange factors that, when bound to Gα12/13, allosterically activate the cytosolic small GTPase, Rho. Once bound to GTP, Rho can then go on to activate various proteins responsible for cytoskeleton regulation such as Rho-kinase (ROCK). Most GPCRs that couple to Gα12/13 also couple to other sub-classes, often Gαq/11.
Gβγ signaling
The above descriptions ignore the effects of Gβγ–signalling, which can also be important, in particular in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs), P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide-3-kinase (PI3K) isoforms.
G-protein-independent signaling
Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling, such as β-arrs, GRKs, and Srcs. Such signaling has been shown to be physiologically relevant; for example, β-arrestin signaling mediated by the chemokine receptor CXCR3 was necessary for full-efficacy chemotaxis of activated T cells. In addition, further scaffolding proteins involved in the subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family.
Examples
In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits.
In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore, it seems likely that some mechanisms previously believed to relate purely to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off.
In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin.
GPCR-independent signaling by heterotrimeric G-proteins
Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling. There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, guanine-nucleotide dissociation inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed activators of G-protein signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear.
Details of cAMP and PIP2 pathways
There are two principal signal transduction pathways involving the G protein-linked receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway.
cAMP signal pathway
The cAMP signal transduction pathway contains five main components: stimulative hormone receptor (Rs) or inhibitory hormone receptor (Ri); stimulative regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase.
Stimulative hormone receptor (Rs) is a receptor that can bind with stimulative signal molecules, while inhibitory hormone receptor (Ri) is a receptor that can bind with inhibitory signal molecules.
Stimulative regulative G-protein is a G-protein linked to stimulative hormone receptor (Rs), and its α subunit upon activation could stimulate the activity of an enzyme or other intracellular metabolism. On the contrary, inhibitory regulative G-protein is linked to an inhibitory hormone receptor, and its α subunit upon activation could inhibit the activity of an enzyme or other intracellular metabolism.
Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes the conversion of ATP to cAMP with the help of cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and is an allosteric activator of protein kinase A.
Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in the metabolic pathway. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The enzyme contains two catalytic subunits and two regulatory subunits. When there is no cAMP, the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects.
These signals then can be terminated by cAMP phosphodiesterase, which is an enzyme that degrades cAMP to 5'-AMP and inactivates protein kinase A.
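The logic of this pathway can be caricatured as a one-variable model in which adenylyl cyclase synthesizes cAMP and phosphodiesterase degrades it. The sketch below is illustrative only: the function name, the multiplicative form of the Gi term, and all rate constants are assumptions chosen for clarity, not measured values.

# Toy model of the cAMP pathway: synthesis by adenylyl cyclase (AC),
# opposed by Gi input, and degradation by cAMP phosphodiesterase (PDE).
# All rate constants are illustrative assumptions, not measured values.
def camp_steady_state(gs_activity, gi_activity, k_ac=10.0, k_pde=1.0):
    """Steady-state [cAMP], where synthesis balances degradation:
    d[cAMP]/dt = k_ac * gs * (1 - gi) - k_pde * [cAMP] = 0."""
    return k_ac * gs_activity * (1.0 - gi_activity) / k_pde

print(camp_steady_state(0.8, 0.0))  # strong Gs input alone -> high cAMP
print(camp_steady_state(0.8, 0.9))  # a Gi-coupled receptor largely cancels it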
Phosphatidylinositol signal pathway
In the phosphatidylinositol signal pathway, the extracellular signal molecule binds with a Gq-coupled receptor on the cell surface and activates phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds with the IP3 receptor in the membrane of the smooth endoplasmic reticulum and mitochondria to open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities and leading to cellular responses.
The effects of Ca2+ are also remarkable: it cooperates with DAG in activating PKC and can activate the CaM kinase pathway, in which the calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a change in conformation, and activates CaM kinase II. CaM kinase II has the unusual ability to increase its binding affinity for CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway.
Receptor regulation
GPCRs become desensitized when exposed to their ligand for a long period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases.
Phosphorylation by cAMP-dependent protein kinases
Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. PKA-mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated.
Phosphorylation by GRKs
The G protein-coupled receptor kinases (GRKs) are key modulators of GPCR signaling. They constitute a family of seven mammalian serine/threonine protein kinases that phosphorylate only agonist-bound, active receptors. GRK-mediated receptor phosphorylation rapidly initiates profound impairment of receptor signaling and desensitization. The activity and subcellular targeting of GRKs are tightly regulated by interactions with receptor domains, G protein subunits, lipids, anchoring proteins, and calcium-sensitive proteins.
Phosphorylation of the receptor can have two consequences:
Translocation: The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated within the acidic vesicular environment and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone, by allowing resensitisation to follow desensitisation. Alternatively, the receptor may undergo lysosomal degradation, or remain internalised, where it is thought to participate in the initiation of signalling events, the nature of which depends on the internalised vesicle's subcellular localisation.
Arrestin linking: The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, in effect switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retina cells to compensate for exposure to bright light. In many cases, arrestin's binding to the receptor is a prerequisite for translocation. For example, beta-arrestin bound to β2-adrenoreceptors acts as an adaptor for binding with clathrin, and with the beta-subunit of AP2 (clathrin adaptor molecules); thus, the arrestin here acts as a scaffold assembling the components needed for clathrin-mediated endocytosis of β2-adrenoreceptors.
Mechanisms of GPCR signal termination
As mentioned above, G-proteins may terminate their own activation due to their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec) and, thus, it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as spatial resolution due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, the RGS proteins are promiscuous in their ability to deactivate G-proteins, and which RGS is involved in a given signaling pathway seems to be determined more by the tissue and GPCR involved than by anything else. RGS proteins also increase the rate of GTP–GDP exchange at GPCRs (i.e., act as a sort of co-GEF), further contributing to the time resolution of GPCR signaling.
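The figures in this paragraph are mutually consistent, as a quick sketch confirms; only the rates stated above are used here.

# First-order deactivation: characteristic time = 1 / hydrolysis rate.
intrinsic_rate = 0.02  # GTP hydrolysis events per second, Galpha alone
rgs_rate = 30.0        # per second, with an RGS protein's GAP domain bound

print(1 / intrinsic_rate)         # ~50 s to deactivate unaided
print(1 / rgs_rate)               # ~0.03 s with RGS assistance
print(rgs_rate / intrinsic_rate)  # 1500-fold acceleration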
In addition, the GPCR may be desensitized itself. This can occur as:
a direct result of ligand occupation, wherein the change in conformation allows recruitment of GPCR-regulating kinases (GRKs), which go on to phosphorylate various serine/threonine residues of the third intracellular loop (IL-3) and the C-terminal tail. Upon GRK phosphorylation, the GPCR's affinity for β-arrestin (β-arrestin-1/2 in most tissues) is increased, at which point β-arrestin may bind and act to both sterically hinder G-protein coupling and initiate the process of receptor internalization through clathrin-mediated endocytosis. Because only the liganded receptor is desensitized by this mechanism, it is called homologous desensitization
the affinity for β-arrestin may be increased in a ligand occupation and GRK-independent manner through phosphorylation of different ser/thr sites (but also of IL-3 and the C-terminal tail) by PKC and PKA. These phosphorylations are often sufficient to impair G-protein coupling on their own as well.
PKC/PKA may, instead, phosphorylate GRKs, which can also lead to GPCR phosphorylation and β-arrestin binding in an occupation-independent manner. These latter two mechanisms allow for desensitization of one GPCR due to the activities of others, or heterologous desensitization. GRKs may also have GAP domains and so may contribute to inactivation through non-kinase mechanisms as well. A combination of these mechanisms may also occur.
Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inwardly as a result of interactions between the molecules of clathrin, forming a clathrin-coated pit. Once the pit has been pinched off the plasma membrane due to the actions of two other proteins called amphiphysin and dynamin, it becomes an endocytic vesicle. At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation.
At any point in this process, the β-arrestins may also recruit other proteins, such as the non-receptor tyrosine kinase (nRTK) c-SRC, which may activate ERK1/2 or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase Ras, or recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2), at which point signaling is initiated due to their close proximity to one another. Other targets of c-SRC are the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of an incoming vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane.
GPCR cellular regulation
Receptor desensitization is mediated through a combination of phosphorylation, β-arr binding, and endocytosis as described above. Downregulation occurs when an endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have low pH (≈4.8, vs. ≈7.2 in the cytosol), which acts to denature the GPCRs. In addition, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, and so the peptide bonds joining the residues of the GPCR together may be cleaved. Whether a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and magnitude of the signal.
GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane.
Receptor oligomerization
G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor. This so-called constitutive receptor is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of the GABABR1 without the GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of functional receptor. It has been shown that GABABR2 binding to GABABR1 causes masking of a retention signal of functional receptors.
Origin and diversification of the superfamily
Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi, and have been classified according to the GRAFS classification system based on GPCR fingerprints. Identification of the superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs have a common origin. Characteristic motifs indicate that three of the five GRAFS families, Rhodopsin, Adhesion, and Frizzled, evolved from the Dictyostelium discoideum cAMP receptors before the split of opisthokonts. Later, the Secretin family evolved from the Adhesion GPCR receptor family before the split of nematodes. Insect GPCRs appear to be in their own group and Taste2 is identified as descending from Rhodopsin. Note that the Secretin/Adhesion split is based on presumed function rather than signature, as the classical Class B (7tm_2) is used to identify both in the studies.
Globular cluster
A globular cluster is a spheroidal conglomeration of stars that is bound together by gravity, with a higher concentration of stars towards its center. It can contain anywhere from tens of thousands to many millions of member stars, all orbiting in a stable, compact formation. Globular clusters are similar in form to dwarf spheroidal galaxies, and though globular clusters were long held to be the more luminous of the two, discoveries of outliers had made the distinction between the two less clear by the early 21st century. Their name is derived from Latin globulus (small sphere). Globular clusters are occasionally known simply as "globulars".
Although one globular cluster, Omega Centauri, was observed in antiquity and long thought to be a star, recognition of the clusters' true nature came with the advent of telescopes in the 17th century. In early telescopic observations, globular clusters appeared as fuzzy blobs, leading French astronomer Charles Messier to include many of them in his catalog of astronomical objects that he thought could be mistaken for comets. Using larger telescopes, 18th-century astronomers recognized that globular clusters are groups of many individual stars. Early in the 20th century the distribution of globular clusters in the sky was some of the first evidence that the Sun is far from the center of the Milky Way.
Globular clusters are found in nearly all galaxies. In spiral galaxies like the Milky Way, they are mostly found in the outer spheroidal part of the galaxy, the galactic halo. They are the largest and most massive type of star cluster, tending to be older, denser, and composed of lower abundances of heavy elements than open clusters, which are generally found in the disks of spiral galaxies. The Milky Way has more than 150 known globulars, and there may be many more.
Both the origin of globular clusters and their role in galactic evolution are unclear. Some are among the oldest objects in their galaxies and even the universe, constraining estimates of the universe's age. Star clusters were formerly thought to consist of stars that all formed at the same time from one star-forming nebula, but nearly all globular clusters contain stars that formed at different times, or that have differing compositions. Some clusters may have had multiple episodes of star formation, and some may be remnants of smaller galaxies captured by larger galaxies.
History of observations
The first known globular cluster, now called M 22, was discovered in 1665 by Abraham Ihle, a German amateur astronomer. The cluster Omega Centauri, easily visible in the southern sky with the naked eye, was known to ancient astronomers like Ptolemy as a star, but was reclassified as a nebula by Edmond Halley in 1677, then finally as a globular cluster in the early 19th century by John Herschel. The French astronomer Abbé Lacaille listed NGC 104, M 55, M 69, and several others in his 1751–1752 catalogue. The low resolution of early telescopes prevented individual stars in a cluster from being visually separated until Charles Messier observed M 4 in 1764.
When William Herschel began his comprehensive survey of the sky using large telescopes in 1782, there were 34 known globular clusters. Herschel discovered another 36 and was the first to resolve virtually all of them into stars. He coined the term globular cluster in his Catalogue of a Second Thousand New Nebulae and Clusters of Stars (1789). In 1914, Harlow Shapley began a series of studies of globular clusters, published across about forty scientific papers. He examined the clusters' RR Lyrae variables (stars which he assumed were Cepheid variables) and used their luminosity and period of variability to estimate the distances to the clusters. RR Lyrae variables were later found to be fainter than Cepheid variables, causing Shapley to overestimate the distances.
A large majority of the Milky Way's globular clusters are found in the halo around the galactic core. In 1918, Shapley used this strongly asymmetrical distribution to determine the overall dimensions of the galaxy. Assuming a roughly spherical distribution of globular clusters around the galaxy's center, he used the positions of the clusters to estimate the position of the Sun relative to the Galactic Center. He correctly concluded that the Milky Way's center is in the Sagittarius constellation and not near the Earth. He overestimated the distance, finding typical globular cluster distances of 10–30 kiloparsecs; the modern distance to the Galactic Center is roughly 8.5 kiloparsecs. Shapley's measurements indicated the Sun is relatively far from the center of the galaxy, contrary to what had been inferred from the observed uniform distribution of ordinary stars. In reality most ordinary stars lie within the galaxy's disk and are thus obscured by gas and dust in the disk, whereas globular clusters lie outside the disk and can be seen at much greater distances.
The count of known globular clusters in the Milky Way has continued to increase, reaching 83 in 1915, 93 in 1930, 97 by 1947, and 157 in 2010. Additional, undiscovered globular clusters are believed to be in the galactic bulge or hidden by the gas and dust of the Milky Way. For example, most of the Palomar globular clusters were only discovered in the 1950s, with some located relatively close by yet obscured by dust, while others reside in the very far reaches of the Milky Way halo. The Andromeda Galaxy, which is comparable in size to the Milky Way, may have as many as five hundred globulars. Every galaxy of sufficient mass in the Local Group has an associated system of globular clusters, as does almost every large galaxy surveyed. Some giant elliptical galaxies (particularly those at the centers of galaxy clusters), such as M 87, have as many as 13,000 globular clusters.
Classification
Shapley was later assisted in his studies of clusters by Henrietta Swope and Helen Sawyer Hogg. In 1927–1929, Shapley and Sawyer categorized clusters by the degree of concentration of stars toward each core. Their system, known as the Shapley–Sawyer Concentration Class, identifies the most concentrated clusters as Class I and ranges to the most diffuse Class XII. Astronomers from the Pontifical Catholic University of Chile proposed a new type of globular cluster on the basis of observational data in 2015: Dark globular clusters.
Formation
The formation of globular clusters is poorly understood. Globular clusters have traditionally been described as a simple star population formed from a single giant molecular cloud, and thus with roughly uniform age and metallicity (proportion of heavy elements in their composition). Modern observations show that nearly all globular clusters contain multiple populations; the globular clusters in the Large Magellanic Cloud (LMC) exhibit a bimodal population, for example. During their youth, these LMC clusters may have encountered giant molecular clouds that triggered a second round of star formation. This star-forming period is relatively brief, compared with the age of many globular clusters. It has been proposed that this multiplicity in stellar populations could have a dynamical origin. In the Antennae Galaxy, for example, the Hubble Space Telescope has observed clusters of clusters: regions in the galaxy that span hundreds of parsecs, in which many of the clusters will eventually collide and merge. Their overall range of ages and (possibly) metallicities could lead to clusters with a bimodal, or even multiple, distribution of populations.
Observations of globular clusters show that their stars primarily come from regions of more efficient star formation, where the interstellar medium is denser than in normal star-forming regions. Globular cluster formation is prevalent in starburst regions and in interacting galaxies. Some globular clusters likely formed in dwarf galaxies and were removed by tidal forces to join the Milky Way. In elliptical and lenticular galaxies there is a correlation between the mass of the supermassive black holes (SMBHs) at their centers and the extent of their globular cluster systems. The mass of the SMBH in such a galaxy is often close to the combined mass of the galaxy's globular clusters.
No known globular clusters display active star formation, consistent with the hypothesis that globular clusters are typically the oldest objects in their galaxy and were among the first collections of stars to form. Very large regions of star formation known as super star clusters, such as Westerlund 1 in the Milky Way, may be the precursors of globular clusters.
Many of the Milky Way's globular clusters have a retrograde orbit (meaning that they revolve around the galaxy in the reverse of the direction the galaxy is rotating), including the most massive, Omega Centauri. Its retrograde orbit suggests it may be a remnant of a dwarf galaxy captured by the Milky Way.
Composition
Globular clusters are generally composed of hundreds of thousands of low-metal, old stars. The stars found in a globular cluster are similar to those in the bulge of a spiral galaxy but confined to a spheroid in which half the light is emitted within a radius of only a few to a few tens of parsecs. They are free of gas and dust, and it is presumed that all the gas and dust was long ago either turned into stars or blown out of the cluster by the massive first-generation stars.
Globular clusters can contain a high density of stars; on average about 0.4 stars per cubic parsec, increasing to 100 or 1000 stars per cubic parsec in the core of the cluster. In comparison, the stellar density around the Sun is roughly 0.1 stars per cubic parsec. The typical distance between stars in a globular cluster is about one light year, but at its core the separation between stars averages about a third of a light year, thirteen times closer than the Sun is to its nearest neighbor, Proxima Centauri.
Globular clusters are thought to be unfavorable locations for planetary systems. Planetary orbits are dynamically unstable within the cores of dense clusters because of the gravitational perturbations of passing stars. A planet orbiting at one astronomical unit around a star that is within the core of a dense cluster, such as 47 Tucanae, would survive only on the order of a hundred million years. There is a planetary system orbiting a pulsar (PSR B1620−26) that belongs to the globular cluster M4, but these planets likely formed after the event that created the pulsar.
Some globular clusters, like Omega Centauri in the Milky Way and Mayall II in the Andromeda Galaxy, are extraordinarily massive, measuring several million solar masses and having multiple stellar populations. Both are evidence that supermassive globular clusters formed from the cores of dwarf galaxies that have been consumed by larger galaxies. About a quarter of the globular cluster population in the Milky Way may have been accreted this way, as with more than 60% of the globular clusters in the outer halo of Andromeda.
Heavy element content
Globular clusters normally consist of Population II stars which, compared with Population I stars such as the Sun, have a higher proportion of hydrogen and helium and a lower proportion of heavier elements. Astronomers refer to these heavier elements as metals (distinct from the material concept) and to the proportions of these elements as the metallicity. Produced by stellar nucleosynthesis, the metals are recycled into the interstellar medium and enter a new generation of stars. The proportion of metals can thus be an indication of the age of a star in simple models, with older stars typically having a lower metallicity.
The Dutch astronomer Pieter Oosterhoff observed two special populations of globular clusters, which became known as Oosterhoff groups. The second group has a slightly longer period of RR Lyrae variable stars. While both groups have a low proportion of metallic elements as measured by spectroscopy, the metal spectral lines in the stars of an Oosterhoff type I (OoI) cluster are not quite as weak as those in type II (OoII), and so type I stars are referred to as metal-rich (e.g. Terzan 7), while type II stars are metal-poor (e.g. ESO 280-SC06). These two distinct populations have been observed in many galaxies, especially massive elliptical galaxies. Both groups are nearly as old as the universe itself and are of similar ages. Suggested scenarios to explain these subpopulations include violent gas-rich galaxy mergers, the accretion of dwarf galaxies, and multiple phases of star formation in a single galaxy. In the Milky Way, the metal-poor clusters are associated with the halo and the metal-rich clusters with the bulge.
A large majority of the metal-poor clusters in the Milky Way are aligned on a plane in the outer part of the galaxy's halo. This observation supports the view that type II clusters were captured from a satellite galaxy, rather than being the oldest members of the Milky Way's globular cluster system as was previously thought. The difference between the two cluster types would then be explained by a time delay between when the two galaxies formed their cluster systems.
Exotic components
Close interactions and near-collisions of stars occur relatively often in globular clusters because of their high star density. These chance encounters give rise to some exotic classes of stars, such as blue stragglers, millisecond pulsars, and low-mass X-ray binaries, which are much more common in globular clusters. How blue stragglers form remains unclear, but most models attribute them to interactions between stars, such as stellar mergers, the transfer of material from one star to another, or even an encounter between two binary systems. The resulting star has a higher temperature than other stars in the cluster with comparable luminosity and thus differs from the main-sequence stars formed early in the cluster's existence. Some clusters have two distinct sequences of blue stragglers, one bluer than the other.
Astronomers have searched for black holes within globular clusters since the 1970s. The required resolution for this task is exacting; it is only with the Hubble Space Telescope (HST) that the first claimed discoveries were made, in 2002 and 2003. Based on HST observations, other researchers suggested the existence of an intermediate-mass black hole of roughly 4,000 solar masses in the globular cluster M15 and a black hole of roughly 20,000 solar masses in the Mayall II cluster of the Andromeda Galaxy. Both X-ray and radio emissions from Mayall II appear consistent with an intermediate-mass black hole; however, these claimed detections are controversial.
The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. One research group pointed out that the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II. Observations from 2018 find no evidence for an intermediate-mass black hole in any globular cluster, including M15, but cannot definitively rule out one of lower mass. Finally, in 2023, an analysis of HST and Gaia spacecraft data from the closest globular cluster, Messier 4, revealed an excess mass of roughly 800 solar masses in the center of this cluster, which appears not to be extended. This could thus be considered as kinematic evidence for an intermediate-mass black hole (even if an unusually compact cluster of compact objects such as white dwarfs, neutron stars, or stellar-mass black holes cannot be completely discounted).
The confirmation of intermediate-mass black holes in globular clusters would have important ramifications for theories of galaxy development as being possible sources for the supermassive black holes at their centers. The mass of these supposed intermediate-mass black holes is proportional to the mass of their surrounding clusters, following a pattern previously discovered between supermassive black holes and their surrounding galaxies.
Hertzsprung–Russell diagrams
Hertzsprung–Russell diagrams (H–R diagrams) of globular clusters allow astronomers to determine many of the properties of their populations of stars. An H–R diagram is a graph of a large sample of stars plotting their absolute magnitude (their luminosity, or brightness measured from a standard distance), as a function of their color index. The color index, roughly speaking, measures the color of the star; positive color indices indicate a reddish star with a cool surface temperature, while negative values indicate a bluer star with a hotter surface. Stars on an H–R diagram mostly lie along a roughly diagonal line sloping from hot, luminous stars in the upper left to cool, faint stars in the lower right. This line is known as the main sequence and represents the primary stage of stellar evolution. The diagram also includes stars in later evolutionary stages such as the cool but luminous red giants.
Constructing an H–R diagram requires knowing the distance to the observed stars to convert apparent into absolute magnitude. Because all the stars in a globular cluster have about the same distance from Earth, a color–magnitude diagram using their observed magnitudes looks like a shifted H–R diagram (because of the roughly constant difference between their apparent and absolute magnitudes). This shift is called the distance modulus and can be used to calculate the distance to the cluster. The modulus is determined by comparing features (like the main sequence) of the cluster's color–magnitude diagram to corresponding features in an H–R diagram of another set of stars, a method known as spectroscopic parallax or main-sequence fitting.
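A minimal sketch of the calculation implied here: once the distance modulus m − M is read off by main-sequence fitting, the distance follows directly. The magnitudes in the example call are illustrative values, not measurements from this article.

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A fitted shift of m - M = 15 magnitudes would place a cluster at 10 kpc.
print(distance_parsecs(15.0, 0.0))  # 10000.0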
Properties
To a first approximation, globular clusters form at once from a single giant molecular cloud, so a cluster's stars have roughly the same age and composition. A star's evolution is primarily determined by its initial mass, so the positions of stars in a cluster's H–R or color–magnitude diagram mostly reflect their initial masses. A cluster's H–R diagram, therefore, appears quite different from H–R diagrams containing stars of a wide variety of ages. Almost all stars fall on a well-defined curve in globular cluster H–R diagrams, and that curve's shape indicates the age of the cluster. A more detailed H–R diagram often reveals multiple stellar populations as indicated by the presence of closely separated curves, each corresponding to a distinct population of stars with a slightly different age or composition. Observations with the Wide Field Camera 3, installed in 2009 on the Hubble Space Telescope, made it possible to distinguish these slightly different curves.
The most massive main-sequence stars have the highest luminosity and will be the first to evolve into the giant star stage. As the cluster ages, stars of successively lower masses will do the same. Therefore, the age of a single-population cluster can be measured by looking for those stars just beginning to enter the giant star stage, which form a "knee" in the H–R diagram called the main-sequence turnoff, bending to the upper right from the main-sequence line. The absolute magnitude at this bend is directly a function of the cluster's age; an age scale can be plotted on an axis parallel to the magnitude.
The morphology and luminosity of globular cluster stars in H–R diagrams are influenced by numerous parameters, many of which are still actively researched. Recent observations have overturned the historical paradigm that all globular clusters consist of stars born at exactly the same time, or sharing exactly the same chemical abundance. Some clusters feature multiple populations, slightly differing in composition and age; for example, high-precision imagery of cluster NGC 2808 discerned three close, but distinct, main sequences. Further, the placements of the cluster stars in an H–R diagram (including the brightnesses of distance indicators) can be influenced by observational biases. One such effect, called blending, arises when the cores of globular clusters are so dense that observations see multiple stars as a single target. The brightness measured for that seemingly single star is thus incorrect (too bright, given that multiple stars contributed). In turn, the computed distance is incorrect, so the blending effect can introduce a systematic uncertainty into the cosmic distance ladder and may bias the estimated age of the universe and the Hubble constant.
Consequences
The blue stragglers appear on the H–R diagram as a series diverging from the main sequence in the direction of brighter, bluer stars. White dwarfs (the final remnants of some Sun-like stars), which are much fainter and somewhat hotter than the main-sequence stars, lie on the bottom-left of an H–R diagram. Globular clusters can be dated by looking at the temperatures of the coolest white dwarfs, often giving results as old as 12.7 billion years. In comparison, open clusters are rarely older than about half a billion years. The ages of globular clusters place a lower bound on the age of the entire universe, presenting a significant constraint in cosmology. Astronomers were historically faced with age estimates of clusters older than their cosmological models would allow, but better measurements of cosmological parameters, through deep sky surveys and satellites, appear to have resolved this issue.
Studying globular clusters sheds light on how the composition of the formational gas and dust affects stellar evolution; the stars' evolutionary tracks vary depending on the abundance of heavy elements. Data obtained from these studies are then used to study the evolution of the Milky Way as a whole.
Morphology
In contrast to open clusters, most globular clusters remain gravitationally bound together for time periods comparable to the lifespans of most of their stars. Strong tidal interactions with other large masses result in the dispersal of some stars, leaving behind "tidal tails" of stars removed from the cluster.
After formation, the stars in the globular cluster begin to interact gravitationally with each other. The velocities of the stars steadily change, and the stars lose any history of their original velocity. The characteristic interval for this to occur is the relaxation time, related to the characteristic length of time a star needs to cross the cluster and the number of stellar masses. The relaxation time varies by cluster, but a typical value is on the order of one billion years.
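A standard order-of-magnitude estimate, not given in this article but common in textbooks, relates the relaxation time to the number of stars N and the crossing time:

\[ t_{\mathrm{relax}} \approx \frac{N}{8 \ln N}\, t_{\mathrm{cross}} \]

For N ≈ 10^6 stars and a crossing time of order 10^5 years, this yields roughly 10^9 years, consistent with the billion-year figure above.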
Although globular clusters are generally spherical in form, ellipticity can form via tidal interactions. Clusters within the Milky Way and the Andromeda Galaxy are typically oblate spheroids in shape, while those in the Large Magellanic Cloud are more elliptical.
Radii
Astronomers characterize the morphology (shape) of a globular cluster by means of standard radii: the core radius (rc), the half-light radius (rh), and the tidal or Jacobi radius (rt). The radius can be expressed as a physical distance or as a subtended angle in the sky. Considering a radius around the core, the surface luminosity of the cluster steadily decreases with distance, and the core radius is the distance at which the apparent surface luminosity has dropped by half. A comparable quantity is the half-light radius, or the distance from the core containing half the total luminosity of the cluster; the half-light radius is typically larger than the core radius.
Most globular clusters have a half-light radius of less than ten parsecs (pc), although some globular clusters have very large radii, like NGC 2419 (rh = 18 pc) and Palomar 14 (rh = 25 pc). The half-light radius includes stars in the outer part of the cluster that happen to lie along the line of sight, so theorists also use the half-mass radius (rm), the radius from the core that contains half the total mass of the cluster. A small half-mass radius, relative to the overall size, indicates a dense core. Messier 3 (M3), for example, has an overall visible dimension of about 18 arc minutes, but a half-mass radius of only 1.12 arc minutes.
The tidal radius, or Hill sphere, is the distance from the center of the globular cluster at which the external gravitation of the galaxy has more influence over the stars in the cluster than does the cluster itself. This is the distance at which the individual stars belonging to a cluster can be separated away by the galaxy. The tidal radius of M3, for example, is about forty arc minutes, or about 113 pc.
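Since the tidal radius is identified with the Hill sphere, one common approximation (stated here as an assumption, since the article gives no formula) for a cluster of mass M_c orbiting at distance R from a galactic center with enclosed mass M_g is

\[ r_t \approx R \left( \frac{M_c}{3 M_g} \right)^{1/3} \]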
Mass segregation, luminosity and core collapse
In most Milky Way clusters, the surface brightness first increases as distance to the core decreases, then levels off, typically 1–2 parsecs from the core. About 20% of the globular clusters have undergone a process termed "core collapse". In such a cluster, the luminosity increases steadily all the way to the core region.
Models of globular clusters predict that core collapse occurs when the more massive stars in a globular cluster encounter their less massive counterparts. Over time, dynamic processes cause individual stars to migrate from the center of the cluster to the outside, resulting in a net loss of kinetic energy from the core region and leading the region's remaining stars to occupy a more compact volume. When this gravothermal instability occurs, the central region of the cluster becomes densely crowded with stars, and the surface brightness of the cluster forms a power-law cusp. A massive black hole at the core could also result in a luminosity cusp. Over a long time, this leads to a concentration of massive stars near the core, a phenomenon called mass segregation.
The dynamical heating effect of binary star systems works to prevent an initial core collapse of the cluster. When a star passes near a binary system, the orbit of the latter pair tends to contract, releasing energy. Only after this primordial supply of energy is exhausted can a deeper core collapse proceed. In contrast, the effect of tidal shocks as a globular cluster repeatedly passes through the plane of a spiral galaxy tends to significantly accelerate core collapse.
Core collapse may be divided into three phases. During a cluster's adolescence, core collapse begins with stars nearest the core. Interactions between binary star systems prevent further collapse as the cluster approaches middle age. The central binaries are either disrupted or ejected, resulting in a tighter concentration at the core. The interaction of stars in the collapsed core region causes tight binary systems to form. As other stars interact with these tight binaries, they increase the energy at the core, causing the cluster to re-expand. As the average time for a core collapse is typically less than the age of the galaxy, many of a galaxy's globular clusters may have passed through a core collapse stage, then re-expanded.
The HST has provided convincing observational evidence of this stellar mass-sorting process in globular clusters. Heavier stars slow down and crowd at the cluster's core, while lighter stars pick up speed and tend to spend more time at the cluster's periphery. The cluster 47 Tucanae, made up of about one million stars, is one of the densest globular clusters in the Southern Hemisphere. This cluster was subjected to an intensive photographic survey that obtained precise velocities for nearly fifteen thousand stars in this cluster.
The overall luminosities of the globular clusters within the Milky Way and the Andromeda Galaxy each have a roughly Gaussian distribution, with an average magnitude Mv and a variance σ². This distribution of globular cluster luminosities is called the Globular Cluster Luminosity Function (GCLF). For the Milky Way, Mv ≈ −7.3 and σ ≈ 1.1 magnitudes. The GCLF has been used as a "standard candle" for measuring the distance to other galaxies, under the assumption that globular clusters in remote galaxies behave similarly to those in the Milky Way.
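A sketch of the standard-candle use of the GCLF: the observed turnover magnitude is compared with an assumed absolute turnover magnitude via the distance modulus. Both numbers below are illustrative assumptions; a real measurement would take the calibration from the literature.

def gclf_distance_mpc(observed_turnover_mag, turnover_abs_mag=-7.3):
    """Distance in Mpc from the GCLF turnover via the distance modulus."""
    mu = observed_turnover_mag - turnover_abs_mag
    return 10 ** ((mu + 5) / 5) / 1e6

print(round(gclf_distance_mpc(24.0), 1))  # turnover at m_v = 24 -> ~18 Mpc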
N-body simulations
Computing the gravitational interactions between stars within a globular cluster requires solving the N-body problem. The naive computational cost for a dynamic simulation increases in proportion to N² (where N is the number of objects), so the computing requirements to accurately simulate a cluster of thousands of stars can be enormous. A more efficient method of simulating the N-body dynamics of a globular cluster is subdivision into small volumes and velocity ranges, using probabilities to describe the locations of the stars. Their motions are described by means of the Fokker–Planck equation, often using a model describing the mass density as a function of radius, such as a Plummer model. The simulation becomes more difficult when the effects of binaries and the interaction with external gravitational forces (such as from the Milky Way galaxy) must also be included. In 2010, the lifetime evolution of a low-density globular cluster was directly computed, star by star.
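The N² cost of the naive approach is visible in a direct-summation sketch such as the one below (illustrative only: G = 1 units, and the softening length eps is an assumed numerical convenience to tame close encounters).

import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations; cost scales as N^2
    because every pair of stars is visited on every step."""
    n = len(mass)
    acc = np.zeros((n, 3))
    for i in range(n):
        d = pos - pos[i]                      # vectors to all other stars
        r2 = (d ** 2).sum(axis=1) + eps ** 2  # softened squared distances
        r2[i] = np.inf                        # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
print(accelerations(rng.normal(size=(100, 3)), np.ones(100)).shape)  # (100, 3)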
Completed N-body simulations have shown that stars can follow unusual paths through the cluster, often forming loops and falling more directly toward the core than would a single star orbiting a central mass. Additionally, some stars gain sufficient energy to escape the cluster due to gravitational interactions that result in a sufficient increase in velocity. Over long periods of time this process leads to the dissipation of the cluster, a process termed evaporation. The typical time scale for the evaporation of a globular cluster is 10¹⁰ years. The ultimate fate of a globular cluster must be either to accrete stars at its core, causing its steady contraction, or gradual shedding of stars from its outer layers.
Binary stars form a significant portion of stellar systems, with up to half of all field stars and open cluster stars occurring in binary systems. The present-day binary fraction in globular clusters is difficult to measure, and any information about their initial binary fraction is lost by subsequent dynamical evolution. Numerical simulations of globular clusters have demonstrated that binaries can hinder and even reverse the process of core collapse in globular clusters. When a star in a cluster has a gravitational encounter with a binary system, a possible result is that the binary becomes more tightly bound and kinetic energy is added to the solitary star. When the massive stars in the cluster are sped up by this process, it reduces the contraction at the core and limits core collapse.
Intermediate forms
Cluster classification is not always definitive; objects have been found that can be classified in more than one category. For example, BH 176 in the southern part of the Milky Way has properties of both an open and a globular cluster.
In 2005 astronomers discovered a new, "extended" type of star cluster in the Andromeda Galaxy's halo, similar to the globular cluster. The three newfound clusters have a similar star count to globular clusters and share other characteristics, such as stellar populations and metallicity, but are distinguished by their larger size, several hundred light years across, and some hundred times lower density. Their stars are separated by larger distances; parametrically, these clusters lie somewhere between a globular cluster and a dwarf spheroidal galaxy.
The formation of these extended clusters is likely related to accretion. It is unclear why the Milky Way lacks such clusters; Andromeda is unlikely to be the sole galaxy with them, but their presence in other galaxies remains unknown.
Tidal encounters
When a globular cluster comes close to a large mass, such as the core region of a galaxy, it undergoes a tidal interaction. The difference in gravitational strength between the nearer and further parts of the cluster results in an asymmetric, tidal force. A "tidal shock" occurs whenever the orbit of a cluster takes it through the plane of a galaxy.
Tidal shocks can pull stars away from the cluster halo, leaving only the core part of the cluster; these trails of stars can extend several degrees away from the cluster. These tails typically both precede and follow the cluster along its orbit and can accumulate significant portions of the original mass of the cluster, forming clump-like features. The globular cluster Palomar 5, for example, is near the apogalactic point of its orbit after passing through the Milky Way. Streams of stars extend outward toward the front and rear of the orbital path of this cluster, stretching to distances of 13,000 light years. Tidal interactions have stripped away much of Palomar 5's mass; further interactions with the galactic core are expected to transform it into a long stream of stars orbiting the Milky Way in its halo.
The Milky Way is in the process of tidally stripping the Sagittarius Dwarf Spheroidal Galaxy of stars and globular clusters through the Sagittarius Stream. As many as 20% of the globular clusters in the Milky Way's outer halo may have originated in that galaxy. Palomar 12, for example, most likely originated in the Sagittarius Dwarf Spheroidal but is now associated with the Milky Way. Tidal interactions like these add kinetic energy into a globular cluster, dramatically increasing the evaporation rate and shrinking the size of the cluster. The increased evaporation accelerates the process of core collapse.
Planets
Astronomers are searching for exoplanets of stars in globular star clusters. A search in 2000 for giant planets in the globular cluster 47 Tucanae came up negative, suggesting that the abundance of heavier elements – low in globular clusters – necessary to build these planets may need to be at least 40% of the Sun's abundance. Because terrestrial planets are built from heavier elements such as silicon, iron and magnesium, member stars have a far lower likelihood of hosting Earth-mass planets than stars in the solar neighborhood. Globular clusters are thus unlikely to host habitable terrestrial planets.
A giant planet was found in the globular cluster M4, orbiting a pulsar in the binary star system PSR B1620−26. The planet's eccentric and highly inclined orbit suggests it may have been formed around another star in the cluster, then "exchanged" into its current arrangement. Close encounters between stars in a globular cluster can disrupt planetary systems; some planets break free to become rogue planets, orbiting the galaxy. Planets orbiting close to their star can become disrupted, potentially leading to orbital decay and an increase in orbital eccentricity and tidal effects. In 2024, a gas giant or brown dwarf was found to closely orbit the pulsar "M62H", where the name indicates that the planetary system belongs to the globular cluster Messier 62.
Gallon
The gallon is a unit of volume in British imperial units and United States customary units. Three different versions are in current use:
the imperial gallon (imp gal), defined as exactly 4.54609 litres, which is or was used in the United Kingdom, Ireland, Canada, Australia, New Zealand, and some Caribbean countries;
the US liquid gallon (US gal), defined as 231 cubic inches (exactly 3.785411784 litres), which is used in the United States and some Latin American and Caribbean countries; and
the US dry gallon, defined as one-eighth of a US bushel (exactly 268.8025 cubic inches, or about 4.405 litres).
There are two pints in a quart and four quarts in a gallon. Different sizes of pints account for the different sizes of the imperial and US gallons.
The IEEE standard symbol for both US (liquid) and imperial gallon is gal, not to be confused with the gal (symbol: Gal), a CGS unit of acceleration.
Definitions
The gallon currently has one definition in the imperial system, and two definitions (liquid and dry) in the US customary system. Historically, there were many definitions and redefinitions.
English system gallons
There were a number of systems of liquid measurements in the United Kingdom prior to the 19th century.
Winchester or corn gallon, about 268.8 cubic inches (1697 act 8 & 9 Will. 3. c. 22)
Henry VII (Winchester) corn gallon, in use from 1497 onwards
Elizabeth I corn gallon, in use from 1601 onwards
William III corn gallon, in use from 1697 onwards
Old English (Elizabethan) ale gallon, 282 cubic inches (Ale Measures Act 1698 (11 Will. 3. c. 15))
London 'Guildhall' gallon (before 1688)
Old English (Queen Anne) wine gallon, standardized as 231 cubic inches in the 1706 act 6 Ann. c. 27
Jersey gallon (from 1562 onwards)
Guernsey gallon (17th century origins until 1917)
Irish gallon (Poynings' Act 1495 (10 Hen. 7. c. 22 (I)), confirmed by 1736 act 9 Geo. 2. c. 9 (I))
Imperial gallon
The British imperial gallon (frequently called simply "gallon") is defined as exactly 4.54609 dm³ (4.54609 litres). It is used in some Commonwealth countries, and until 1976 was defined as the volume of water at 62 °F whose mass is 10 pounds. There are four imperial quarts in a gallon, two imperial pints in a quart, and 20 imperial fluid ounces in an imperial pint, yielding 160 fluid ounces in an imperial gallon.
US liquid gallon
The US liquid gallon (frequently called simply "gallon") is legally defined as 231 cubic inches, which is exactly 3.785411784 litres. A US liquid gallon can contain about 3.785 kilograms (8.34 pounds) of water at 62 °F (17 °C), and is about 16.7% less than the imperial gallon. There are four quarts in a gallon, two pints in a quart and 16 US fluid ounces in a US pint, which makes the US fluid ounce equal to 1/128 of a US gallon.
In order to overcome the effects of expansion and contraction with temperature when using a gallon to specify a quantity of material for purposes of trade, it is common to define the temperature at which the material will occupy the specified volume. For example, the volumes of petroleum products and alcoholic beverages are both referenced to 60 °F (15.6 °C) in government regulations.
US dry gallon
Since the dry measure is one-eighth of a US Winchester bushel of 2150.42 cubic inches, it is equal to exactly 268.8025 cubic inches, which is about 4.405 litres. The US dry gallon is not used in commerce, and is also not listed in the relevant statute, which jumps from the dry pint to the bushel.
Worldwide usage
Imperial gallon
As of 2021, the imperial gallon continues to be used as the standard petrol unit on 10 Caribbean island groups, consisting of:
four British Overseas Territories (Anguilla, the British Virgin Islands, the Cayman Islands, and Montserrat) and
six countries (Antigua and Barbuda, Dominica, Grenada, Saint Christopher and Nevis, Saint Lucia, and Saint Vincent and the Grenadines).
All of these Caribbean islands use miles per hour for speed limit signage and drive on the left side of the road.
The United Arab Emirates ceased selling petrol by the imperial gallon in 2010 and switched to the litre, with Guyana following suit in 2013. In 2014, Myanmar switched from the imperial gallon to the litre.
Antigua and Barbuda has proposed switching to selling petrol by litres since 2015.
In the European Union, the gallon was removed from the catalogue of legally defined primary units of measure in EU directive 80/181/EEC for trading and official purposes, effective from 31 December 1994. Under the directive the gallon could still be used, but only as a supplementary or secondary unit.
As a result of the EU directive, Ireland and the United Kingdom passed legislation to replace the gallon with the litre as a primary unit of measure in trade and in the conduct of public business, effective from 31 December 1993 and 30 September 1995 respectively. Though the gallon has ceased to be a primary unit of trade, it can still be legally used in both the UK and Ireland as a supplementary unit. In practice, barrels and large containers of beer, oil and other fluids are commonly measured in multiples of an imperial gallon.
Miles per imperial gallon is used as the primary fuel economy unit in the United Kingdom and as a supplementary unit in Canada on official documentation.
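For reference, converting miles per imperial gallon to the metric litres-per-100-km figure follows directly from the unit definitions above (this derivation is not in the article itself):

\[ \frac{\mathrm{L}}{100\,\mathrm{km}} = \frac{100 \times 4.54609}{1.609344 \times \mathrm{mpg}_{\mathrm{imp}}} \approx \frac{282.5}{\mathrm{mpg}_{\mathrm{imp}}} \]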
US liquid gallon
Other than the United States, petrol is sold by the US gallon in 12 other countries and four US territories:
the Caribbean countries of Dominican Republic and Haiti,
the Central American countries of Belize, Guatemala, and Nicaragua,
the South American countries of Colombia, Ecuador, and Peru,
the Pacific Ocean countries of the Marshall Islands, the Federated States of Micronesia, and Palau, which are associated countries of the United States,
the African country of Liberia, a former protectorate of the United States, and
the US territories of American Samoa, the Northern Mariana Islands, Guam, and the US Virgin Islands. Puerto Rico ceased selling petrol by the US gallon in 1980.
The most recent country to cease selling petrol by the gallon was El Salvador, in June 2021.
Imperial and US liquid gallon
Both the US gallon and the imperial gallon are used in the Turks and Caicos Islands and in the Bahamas. In the Turks and Caicos Islands, this arose from an increase in tax duties that was disguised by levying the same duty on the US gallon (3.79 L) as had previously been levied on the imperial gallon (4.55 L).
Legacy
In some parts of the Middle East, such as the United Arab Emirates and Bahrain, 18.9-litre water cooler bottles are marketed as five-gallon bottles.
Relationship to other units
Both the US liquid and imperial gallon are divided into four quarts (quarter gallons), which in turn are divided into two pints, which in turn are divided into two cups (not in customary use outside the US), which in turn are further divided into two gills. Thus, both gallons are equal to four quarts, eight pints, sixteen cups, or thirty-two gills.
The imperial gill is further divided into five fluid ounces, whereas the US gill is divided into four fluid ounces, meaning an imperial fluid ounce is 1/20 of an imperial pint, or 1/160 of an imperial gallon, while a US fluid ounce is 1/16 of a US pint, or 1/128 of a US gallon. Thus, the imperial gallon, quart, pint, cup and gill are approximately 20% larger than their US counterparts, meaning these are not interchangeable, but the imperial fluid ounce is only approximately 4% smaller than the US fluid ounce, meaning these are often used interchangeably.
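The shared subdivision ladder, and the divergence at the fluid ounce, can be made concrete in a short sketch (illustrative; the gallon sizes are the definitions given earlier):

    # gallon -> 4 quarts -> 2 pints -> 2 cups -> 2 gills = 32 gills per gallon;
    # the systems then diverge: 5 fl oz per imperial gill, 4 per US gill.
    for name, gallon_ml, oz_per_gill in (("imperial", 4546.09, 5),
                                         ("US", 3785.411784, 4)):
        gill_ml = gallon_ml / 32
        print(name, round(gill_ml, 3), round(gill_ml / oz_per_gill, 3))
    # imperial: gill ~142.065 mL, fl oz ~28.413 mL (160 fl oz per gallon)
    # US:       gill ~118.294 mL, fl oz ~29.574 mL (128 fl oz per gallon)
    # The gills differ by ~20%, the fluid ounces by only ~4%.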
Historically, a common bottle size for liquor in the US was the "fifth", i.e. one-fifth of a US gallon (or one-sixth of an imperial gallon). While spirit sales in the US were switched to metric measures in 1976, a 750 mL bottle is still sometimes known as a "fifth".
History
The term derives most immediately from galun, galon in Old Norman French, but cognate forms were common in several languages, including Old French and an Old English word for "bowl". This suggests a common origin in Romance Latin, but the ultimate source of the word is unknown.
The gallon originated as the base of systems for measuring wine and beer in England. The sizes of gallon used in these two systems were different from each other: the first was based on the wine gallon (equal in size to the US gallon), and the second on either the ale gallon or the larger imperial gallon.
By the end of the 18th century, there were three definitions of the gallon in common use:
The corn gallon, or Winchester gallon, of about 268.8 cubic inches (≈ 4.405 L),
The wine gallon, or Queen Anne's gallon, which was 231 cubic inches (≈ 3.785 L), and
The ale gallon of 282 cubic inches (≈ 4.62 L).
The corn or dry gallon is used (along with the dry quart and pint) in the United States for grain and other dry commodities. It is one-eighth of the (Winchester) bushel, originally defined as a cylindrical measure of 18.5 inches in diameter and 8 inches in depth, which made the bushel approximately 2150.42 cubic inches. The bushel was later defined to be 2150.42 cubic inches exactly, thus making its gallon exactly 268.8025 cubic inches (about 4.405 litres); in previous centuries, there had been a corn gallon of between 271 and 272 cubic inches.
The wine, fluid, or liquid gallon has been the standard US gallon since the early 19th century. The wine gallon, which some sources relate to the volume occupied by eight medieval merchant pounds of wine, was at one time defined as the volume of a cylinder 6 inches deep and 7 inches in diameter, i.e. about 230.9 cubic inches. It was redefined during the reign of Queen Anne in 1706 as 231 cubic inches exactly, which is the result of the earlier definition with π approximated to 22/7.
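Both historical cylinder definitions check out numerically; an illustrative computation:

    import math

    # Winchester bushel: a cylinder 18.5 in across and 8 in deep.
    print(math.pi * (18.5 / 2) ** 2 * 8)   # ~2150.42 cubic inches

    # Wine gallon: a cylinder 7 in across and 6 in deep.
    print(math.pi * (7 / 2) ** 2 * 6)      # ~230.91 cubic inches
    print((22 / 7) * (7 / 2) ** 2 * 6)     # exactly 231.0 when pi is taken as 22/7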
Although the wine gallon had been used for centuries for import duty purposes, there was no legal standard of it in the Exchequer, while a smaller gallon was actually in use, requiring this statute; the 231 cubic inch gallon remains the US definition today.
In 1824, Britain adopted a close approximation to the ale gallon known as the imperial gallon, and abolished all other gallons in favour of it. Inspired by the kilogram-litre relationship, the imperial gallon was based on the volume of 10 pounds of distilled water weighed in air with brass weights with the barometer standing at 30 inches of mercury and at a temperature of 62 °F.
In 1963, this definition was refined as the space occupied by 10 pounds of distilled water of density 0.998859 g/mL weighed in air of density 0.001217 g/mL against weights of density 8.136 g/mL (the original "brass" was refined because the densities of brass alloys vary depending on metallurgical composition), which was calculated as 4.546091879 litres to ten significant figures.
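That figure can be reconstructed from the buoyancy corrections implied by the definition; an illustrative sketch (the closing factor converts pre-1964 litres, each 1.000028 dm3, into cubic decimetres):

    POUND_G = 453.59237                    # grams per avoirdupois pound, exact
    rho_water, rho_air, rho_weights = 0.998859, 0.001217, 8.136  # g/mL

    # A balance equates apparent (buoyancy-reduced) weights, so the true mass
    # of water that balances 10 lb of weights in air is:
    mass_g = 10 * POUND_G * (1 - rho_air / rho_weights) / (1 - rho_air / rho_water)
    volume_litres = mass_g / rho_water / 1000   # pre-1964 litres
    print(volume_litres * 1.000028)             # ~4.546092 cubic decimetres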
The precise definition of exactly 4.54609 cubic decimetres (4.54609 litres, ≈ 277.42 cubic inches) came after the litre was redefined in 1964. This definition was adopted shortly afterwards in Canada, and in 1976 in the United Kingdom.
Sizes of gallons
Historically, gallons of various sizes were used in many parts of Western Europe. In these localities, it has been replaced as the unit of capacity by the litre.
| Physical sciences | Volume | null |
12891 | https://en.wikipedia.org/wiki/Gene%20therapy | Gene therapy | Gene therapy is a medical technology that aims to produce a therapeutic effect through the manipulation of gene expression or through altering the biological properties of living cells.
The first attempt at modifying human DNA was performed in 1980 by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989. The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990. Between 1989 and December 2018, over 2,900 clinical trials were conducted, with more than half of them in phase I. In 2003, Gendicine became the first gene therapy to receive regulatory approval. Since that time, further gene therapy drugs have been approved, such as alipogene tiparvovec (2012), Strimvelis (2016), tisagenlecleucel (2017), voretigene neparvovec (2017), patisiran (2018), onasemnogene abeparvovec (2019), idecabtagene vicleucel (2021), nadofaragene firadenovec, valoctocogene roxaparvovec and etranacogene dezaparvovec (all 2022). Most of these approaches utilize adeno-associated viruses (AAVs) and lentiviruses for performing gene insertions, in vivo and ex vivo, respectively. AAVs are characterized by a stabilized viral capsid, lower immunogenicity, the ability to transduce both dividing and nondividing cells, the potential to integrate site-specifically, and the ability to achieve long-term expression in in vivo treatment. ASO / siRNA approaches such as those conducted by Alnylam and Ionis Pharmaceuticals require non-viral delivery systems, and utilize alternative mechanisms for trafficking to liver cells by way of GalNAc transporters.
Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.
Background
Gene therapy was first conceptualized in the 1960s, when the feasibility of adding new genetic functions to mammalian cells began to be researched. Several methods to do so were tested, including injecting genes with a micropipette directly into a living mammalian cell, and exposing cells to a precipitate of DNA that contained the desired genes. Scientists theorized that a virus could also be used as a vehicle, or vector, to deliver new genes into cells.
One of the first scientists to report the successful direct incorporation of functional DNA into a mammalian cell was biochemist Dr. Lorraine Marquardt Kraus (6 September 1922 – 1 July 2016) at the University of Tennessee Health Science Center in Memphis, Tennessee. In 1961, she managed to genetically alter the hemoglobin of cells from bone marrow taken from a patient with sickle cell anaemia. She did this by incubating the patient's cells in tissue culture with DNA extracted from a donor with normal hemoglobin. In 1968, researchers Theodore Friedmann, Jay Seegmiller, and John Subak-Sharpe at the National Institutes of Health (NIH), Bethesda, in the United States successfully corrected genetic defects associated with Lesch-Nyhan syndrome, a debilitating neurological disease, by adding foreign DNA to cultured cells collected from patients suffering from the disease.
The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by geneticist Martin Cline of the University of California, Los Angeles in California, United States on 10 July 1980. Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified.
After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashanthi DeSilva was treated for ADA-SCID.
The first somatic treatment that produced a permanent genetic change was initiated in 1993. The goal was to cure malignant brain tumors by using recombinant DNA to transfer a gene making the tumor cells sensitive to a drug that in turn would cause the tumor cells to die.
Gene therapy works by delivering nucleic acid polymers into cells. The polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations. The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a "vector", which carries the molecule inside cells.
Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although it has remained largely an experimental technique. These successes include treatment of the retinal diseases Leber's congenital amaurosis and choroideremia, as well as X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia, and Parkinson's disease. Between 2013 and April 2014, US companies invested over $600 million in the field.
The first commercial gene therapy, Gendicine, was approved in China in 2003, for the treatment of certain cancers. In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.
In 2012, alipogene tiparvovec, a treatment for a rare inherited disorder, lipoprotein lipase deficiency, became the first treatment to be approved for clinical use in either the European Union or the United States after its endorsement by the European Commission.
Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered – replacing or disrupting defective genes. Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Alipogene tiparvovec treats one such disease, caused by a defect in lipoprotein lipase.
DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein. Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome. Naked DNA approaches have also been explored, especially in the context of vaccine development.
Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. These approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.
Gene editing is a potential approach to alter the human genome to treat genetic diseases, viral diseases, and cancer. These approaches are being studied in clinical trials.
Classification
Breadth of definition
In 1986, a meeting at the Institute of Medicine defined gene therapy as the addition or replacement of a gene in a targeted cell type. In the same year, the FDA announced that it had jurisdiction over approving "gene therapy" without defining the term. The FDA added a very broad definition in 1993: any treatment that would 'modify or manipulate the expression of genetic material or alter the biological properties of living cells'. In 2018 this was narrowed to 'products that mediate their effects by transcription or translation of transferred genetic material or by specifically altering host (human) genetic sequences'.
Writing in 2018, in the Journal of Law and the Biosciences, Sherkow et al. argued for a narrower definition of gene therapy than the FDA's in light of new technology that would consist of any treatment that intentionally and permanently modified a cell's genome, with the definition of genome including episomes outside the nucleus but excluding changes due to episomes that are lost over time. This definition would also exclude introducing cells that did not derive from a patient themselves, but include ex vivo approaches, and would not depend on the vector used.
During the COVID-19 pandemic, some academics insisted that the mRNA vaccines for COVID-19 were not gene therapy, in order to prevent the spread of the incorrect claim that the vaccines could alter DNA; other academics maintained that the vaccines were a gene therapy because they introduced genetic material into a cell. Fact-checkers, such as Full Fact, Reuters, PolitiFact, and FactCheck.org, said that calling the vaccines a gene therapy was incorrect. Podcast host Joe Rogan was criticized for calling mRNA vaccines gene therapy, as was British politician Andrew Bridgen, with fact-checker Full Fact calling for Bridgen to be removed from the Conservative Party for this and other statements.
Genes present or added
Gene therapy encapsulates many forms of adding different nucleic acids to a cell. Gene augmentation adds a new protein-coding gene to a cell. One form of gene augmentation is gene replacement therapy, a treatment for monogenic recessive disorders in which a single gene is not functional and an additional functional copy is added. For diseases caused by multiple genes or by a dominant gene, gene silencing or gene editing approaches are more appropriate, but gene addition, a form of gene augmentation in which a new gene is added, may improve a cell's function without modifying the genes that cause a disorder.
Cell types
Gene therapy may be classified into two types by the type of cell it affects: somatic cell and germline gene therapy.
In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease. Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.
In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and higher risks versus SCGT. The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).
In vivo versus ex vivo therapies
In in vivo gene therapy, a vector (typically, a virus) is introduced to the patient, which then achieves the desired biological effect by passing the genetic material (e.g. for a missing protein) into the patient's cells. In ex vivo gene therapies, such as CAR-T therapeutics, the patient's own cells (autologous) or healthy donor cells (allogeneic) are modified outside the body (hence, ex vivo) using a vector to express a particular protein, such as a chimeric antigen receptor.
In vivo gene therapy is seen as simpler, since it does not require the harvesting of mitotic cells. However, ex vivo gene therapies are better tolerated and less associated with severe immune responses. The death of Jesse Gelsinger in a trial of an adenovirus-vectored treatment for ornithine transcarbamylase deficiency due to a systemic inflammatory reaction led to a temporary halt on gene therapy trials across the United States. Both in vivo and ex vivo therapeutics are now generally regarded as safe.
Gene editing
The concept of gene therapy is to fix a genetic problem at its source. If, for instance, a mutation in a certain gene causes the production of a dysfunctional protein resulting (usually recessively) in an inherited disease, gene therapy could be used to deliver a copy of this gene that does not contain the deleterious mutation and thereby produces a functional protein. This strategy is referred to as gene replacement therapy and could be employed to treat inherited retinal diseases.
While the concept of gene replacement therapy is mostly suitable for recessive diseases, novel strategies have been suggested that are capable of also treating conditions with a dominant pattern of inheritance.
The introduction of CRISPR gene editing has opened new doors for its application and utilization in gene therapy, as instead of pure replacement of a gene, it enables correction of the particular genetic defect. Solutions to medical hurdles, such as the eradication of latent human immunodeficiency virus (HIV) reservoirs and correction of the mutation that causes sickle cell disease, may be available as a therapeutic option in the future.
Prosthetic gene therapy aims to enable cells of the body to take over functions they physiologically do not carry out. One example is so-called vision restoration gene therapy, which aims to restore vision in patients with end-stage retinal diseases. In end-stage retinal diseases, the photoreceptors, the primary light-sensitive cells of the retina, are irreversibly lost. By means of prosthetic gene therapy, light-sensitive proteins are delivered into the remaining cells of the retina, to render them light-sensitive and thereby enable them to signal visual information towards the brain.
In vivo, gene editing systems using CRISPR have been used in studies with mice to treat cancer and have been effective at reducing tumors. In vitro, the CRISPR system has been used to treat HPV-related cancer tumors. Adeno-associated virus and lentivirus-based vectors have been used to introduce the genes encoding the CRISPR system.
Vectors
The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).
Viruses
In order to replicate, viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the nuclear genome of the host cell. Scientists exploit this by substituting part of a virus's genetic material with therapeutic DNA or RNA. Like the genetic material (DNA or RNA) in viruses, therapeutic genetic material can be designed to simply serve as a temporary blueprint that degrades naturally, as in non-integrative vectors, or to enter the host's nucleus and become a permanent part of the host's nuclear DNA in infected cells.
A number of viruses have been used for human gene therapy, including viruses such as lentivirus, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus.
Adenovirus viral vectors (Ad) temporarily modify a cell's genetic expression with genetic material that is not integrated into the host cell's DNA. As of 2017, such vectors were used in 20% of trials for gene therapy. Adenovirus vectors are mostly used in cancer treatments and novel genetic vaccines such as the Ebola vaccine, vaccines used in clinical trials for HIV and SARS-CoV-2, or cancer vaccines.
Lentiviral vectors based on lentivirus, a retrovirus, can modify a cell's nuclear genome to permanently express a gene, although vectors can be modified to prevent integration. Retroviruses were used in 18% of trials before 2018. Libmeldy is an ex vivo stem cell treatment for metachromatic leukodystrophy which uses a lentiviral vector and was approved by the European Medicines Agency in 2020.
Adeno-associated virus (AAV) is a virus that is incapable of transmission between cells unless the cell is infected by another virus, a helper virus. Adenovirus and the herpes viruses act as helper viruses for AAV. AAV persists within the cell outside of the cell's nuclear genome for an extended period of time through the formation of concatemers mostly organized as episomes. Genetic material from AAV vectors is integrated into the host cell's nuclear genome at a low frequency, likely mediated by the DNA-modifying enzymes of the host cell. Animal models suggest that integration of AAV genetic material into the host cell's nuclear genome may cause hepatocellular carcinoma, a form of liver cancer. Several AAV investigational agents have been explored in the treatment of wet age-related macular degeneration by both intravitreal and subretinal approaches as a potential application of AAV gene therapy for human disease.
Non-viral
Non-viral vectors for gene therapy present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Newer technologies offer promise of solving these problems, with the advent of increased cell-specific targeting and subcellular trafficking control.
Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles. These therapeutics can be administered directly or through scaffold enrichment.
More recent approaches, such as those pursued by companies such as Ligandal, offer the possibility of creating cell-specific targeting technologies for a variety of gene therapy modalities, including RNA, DNA and gene editing tools such as CRISPR. Other companies, such as Arbutus Biopharma and Arcturus Therapeutics, offer non-viral, non-cell-targeted approaches that mainly exhibit liver trophism. In more recent years, startups such as Sixfold Bio, GenEdit, and Spotlight Therapeutics have begun to address the non-viral gene delivery problem. Non-viral techniques offer the possibility of repeat dosing and greater tailorability of genetic payloads, and may in the future displace viral delivery systems.
Companies such as Editas Medicine, Intellia Therapeutics, CRISPR Therapeutics, Casebia, Cellectis, Precision Biosciences, bluebird bio, Excision BioTherapeutics, and Sangamo have developed non-viral gene editing techniques; however, they frequently still use viruses to deliver gene-insertion material following genomic cleavage by guided nucleases. These companies focus on gene editing, and still face major delivery hurdles.
BioNTech, Moderna Therapeutics and CureVac focus on delivery of mRNA payloads, which necessarily require non-viral delivery.
Alnylam, Dicerna Pharmaceuticals, and Ionis Pharmaceuticals focus on delivery of siRNA and antisense oligonucleotides for gene suppression, which also necessitate non-viral delivery systems.
In academic contexts, a number of laboratories are working on delivery of PEGylated particles, which form serum protein coronas and chiefly exhibit LDL receptor mediated uptake in cells in vivo.
Treatment
Cancer
There have been attempts to treat cancer using gene therapy. As of 2017, 65% of gene therapy trials were for cancer treatment.
Adenovirus vectors are useful for some cancer gene therapies because adenovirus can transiently insert genetic material into a cell without permanently altering the cell's nuclear genome. These vectors can be used to cause antigens to be added to cancers causing an immune response, or hinder angiogenesis by expressing certain proteins. An Adenovirus vector is used in the commercial products Gendicine and Oncorine. Another commercial product, Rexin G, uses a retrovirus-based vector and selectively binds to receptors that are more expressed in tumors.
One approach, suicide gene therapy, works by introducing genes encoding enzymes that will cause a cancer cell to die. Another approach is the use of oncolytic viruses, such as Oncorine, which selectively reproduce in cancerous cells while leaving other cells unaffected.
mRNA has been suggested as a non-viral approach to cancer gene therapy that would temporarily change a cancerous cell's function to create antigens or kill the cancerous cells, and there have been several trials.
Afamitresgene autoleucel, sold under the brand name Tecelra, is an autologous T cell immunotherapy used for the treatment of synovial sarcoma. It is a T cell receptor (TCR) gene therapy. It is the first FDA-approved engineered cell therapy for a solid tumor. It uses a self-inactivating lentiviral vector to express a T-cell receptor specific for MAGE-A4, a melanoma-associated antigen.
Genetic diseases
Gene therapy approaches to replace a faulty gene with a healthy gene have been proposed and are being studied for treating some genetic diseases. As of 2017, 11.1% of gene therapy clinical trials targeted monogenic diseases.
Diseases such as sickle cell disease that are caused by autosomal recessive mutations, in which a person's normal phenotype or cell function may be restored in affected cells by a normal copy of the mutated gene, may be good candidates for gene therapy treatment. The risks and benefits of gene therapy for sickle cell disease are not fully known.
Gene therapy has been used in the eye, which is especially suitable for adeno-associated virus vectors. Voretigene neparvovec is an approved gene therapy to treat Leber congenital amaurosis. Alipogene tiparvovec, a treatment for pancreatitis caused by a genetic condition, and Zolgensma, for the treatment of spinal muscular atrophy, both use an adeno-associated virus vector.
Infectious diseases
As of 2017, 7% of genetic therapy trials targeted infectious diseases. 69.2% of trials targeted HIV, 11% hepatitis B or C, and 7.1% malaria.
List of gene therapies for treatment of disease
Some genetic therapies have been approved by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), as well as for use in Russia and China.
Adverse effects, contraindications and hurdles for use
Some of the unsolved problems include:
Off-target effects – The possibility of unwanted, likely harmful, changes to the genome present a large barrier to the widespread implementation of this technology. Improvements to the specificity of gRNAs and Cas enzymes present viable solutions to this issue as well as the refinement of the delivery method of CRISPR. It is likely that different diseases will benefit from different delivery methods.
Short-lived nature – Before gene therapy can become a permanent cure for a condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be stable. Problems with integrating therapeutic DNA into the nuclear genome and the rapidly dividing nature of many cells prevent it from achieving long-term benefits. Patients may therefore require multiple treatments.
Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. Stimulating the immune system in a way that reduces gene therapy effectiveness is possible. The immune system's enhanced response to viruses that it has seen before also reduces the effectiveness of repeated treatments.
Problems with viral vectors – Viral vectors carry the risks of toxicity, inflammatory responses, and gene control and targeting issues.
Multigene disorders – Some commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer's disease, arthritis, and diabetes, are affected by variations in multiple genes, which complicate gene therapy.
Some therapies may breach the Weismann barrier (between soma and germ-line) protecting the testes, potentially modifying the germline, falling afoul of regulations in countries that prohibit the latter practice.
Insertional mutagenesis – If the DNA is integrated in a sensitive spot in the genome, for example in a tumor suppressor gene, the therapy could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution is to add a functional tumor suppressor gene to the DNA to be integrated. This may be problematic since the longer the DNA is, the harder it is to integrate into cell genomes. CRISPR technology allows researchers to make much more precise genome changes at exact locations.
Cost – alipogene tiparvovec (Glybera), for example, at a cost of $1.6 million per patient, was reported in 2013 to be the world's most expensive drug.
Deaths
Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999, because of immune rejection response. One X-SCID patient died of leukemia in 2003. In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.
Regulations
Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies.
The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001, provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.
United States
No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.
NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.
An NIH advisory committee published a set of guidelines on gene manipulation. The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient. The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.
As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.
Gene doping
Athletes may adopt gene therapy technologies to improve their performance. Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.
Genetic enhancement
Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering.
A 2020 issue of the journal Bioethics was devoted to moral issues surrounding germline genetic engineering in people.
Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics."
As early as 1990, there were scientists opposed to attempts to modify the human germline using these new tools, and such concerns have continued as technology progressed. With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 once answers had been found to safety and efficiency problems, "but only for serious conditions under stringent oversight."
History
1970s and earlier
In 1972, Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?". Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those with genetic defects.
1980s
In 1984, a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.
1990s
The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson. Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with adenosine deaminase deficiency (ADA-SCID), a severe immune system deficiency. The defective gene in the patient's blood cells was replaced by the functional variant. Ashanti's immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but new cells with functional genes were not generated. She could lead a normal life only with regular injections performed every two months. The effects were successful, but temporary.
Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993). The treatment of glioblastoma multiforme, a malignant brain tumor whose outcome is always fatal, was done using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no. 1602 on 24 November 1993, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proves to be effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.
In 1992, Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002, this work led to the publication of the first successful gene therapy treatment for ADA-SCID. The success of a multi-center trial for treating children with SCID (severe combined immunodeficiency, or "bubble boy" disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany.
In 1993, Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.
In 1996, Luigi Naldini and Didier Trono developed a new class of gene therapy vectors based on HIV that are capable of infecting non-dividing cells and have since been widely used in clinical and research settings, pioneering lentiviral vectors in gene therapy.
Jesse Gelsinger's death in 1999 impeded gene therapy research in the US. As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.
2000s
The modified gene therapy strategy of antisense IGF-I RNA (NIH no. 1602), using an antisense/triple-helix anti-IGF-I approach, was registered in 2002 in the Wiley gene therapy clinical trial registry (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This antigene antisense/triple-helix therapy has proven to be efficient due to a mechanism that simultaneously stops IGF-I expression at the translational and transcriptional levels, strengthening anti-tumor immune and apoptotic phenomena.
2002
Sickle cell disease was successfully treated in mice. The mice – which have essentially the same defect that causes human cases – were treated with a viral vector that induces production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.
A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.
Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.
2003
In 2003, a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which unlike viral vectors, are small enough to cross the blood–brain barrier.
Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.
Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.
2006
In March, researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.
In May, a team reported a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.
In August, scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.
In November, researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.
2007
In May 2007, researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.
2008
Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April. Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May, two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.
2009
In September researchers were able to give trichromatic vision to squirrel monkeys. In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.
2010s
2010
An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.
In September it was announced that an 18-year-old male patient in France with beta thalassemia major had been successfully treated. Beta thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.
Cancer immunogene therapy using the modified antigene antisense/triple-helix approach was introduced in South America in 2010/11 at La Sabana University, Bogota (Ethical Committee approval of 14 December 2010, no. P-004-10). Considering the ethical aspects of gene diagnosis and gene therapy targeting IGF-I, tumors expressing IGF-I, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).
2011
In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor carrying a double delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011. It required complete ablation of the existing bone marrow, which is very debilitating.
In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.
In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding VEGF. Neovasculgen is a plasmid encoding the CMV promoter and the 165-amino-acid form of VEGF.
2012
The FDA approved Phase I clinical trials on thalassemia major patients in the US for 10 participants in July. The study was expected to continue until 2015.
In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout began in late 2014. Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012, revised to $1 million in 2015, making it the most expensive medicine in the world at the time. In the end, only the patients treated in clinical trials and one patient who paid the full price for treatment received the drug.
In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.
2013
In March researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months. They were also given bone marrow. One patient had relapsed and died and one had died of a blood clot unrelated to the disease.
Following encouraging Phase I trials, in April, researchers announced they were starting Phase II clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function. The U.S. Food and Drug Administration (FDA) granted this a breakthrough therapy designation to accelerate the trial and approval process. In 2016, it was reported that no improvement was found from the CUPID 2 trial.
In July researchers reported promising results in six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up after 7–32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott–Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer. Follow-up trials with gene therapy on another six children with Wiskott–Aldrich syndrome were also reported as promising.
In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress. In 2014, a further 18 children with ADA-SCID were cured by gene therapy. ADA-SCID children have no functioning immune system and are sometimes known as "bubble children".
Also in October researchers reported that they had treated six people with haemophilia in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.
2014
In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight. By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting. Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.
In March researchers reported that 12 HIV patients had been treated since 2009 in a trial in which their cells were genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results.
Clinical trials of gene therapy for sickle cell disease were started in 2014.
2015
Researchers successfully treated a boy with epidermolysis bullosa using skin grafts grown from his own skin cells, genetically altered to repair the mutation that caused his disease.
In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia, gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.
In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to ebola, malaria, influenza, and hepatitis were underway.
In March, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations".
In November, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment.
In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies, while saying that basic research, including embryo gene editing, should continue.
2016
In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis and the European Commission approved it in June. This treats children born with adenosine deaminase deficiency and who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe.
In October, Chinese scientists reported they had started a trial to genetically modify T cells from 10 adult patients with lung cancer and reinject the modified T cells back into their bodies to attack the cancer cells. The T cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9.
A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy.
2017
In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma.
In March, French scientists reported on clinical research of gene therapy to treat sickle cell disease.
In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia. Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to make a specific T-cell receptor (a chimeric T cell receptor, or "CAR-T") that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma.
In October, biophysicist and biohacker Josiah Zayner claimed to have performed the very first in-vivo human genome editing in the form of a self-administered therapy.
On 13 November, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever in-body human gene editing therapy. The treatment, designed to permanently insert a healthy version of the flawed gene that causes Hunter syndrome, was given to 44-year-old Brian Madeux and is part of the world's first study to permanently edit DNA inside the human body. The success of the gene insertion was later confirmed. Clinical trials by Sangamo involving gene editing using zinc finger nuclease (ZFN) are ongoing.
In December the results of using an adeno-associated virus carrying blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high-dose regime increased their levels of blood clotting factor VIII to normal levels. The low- and medium-dose regimes had no effect on the patients' blood clotting levels.
In December, the FDA approved voretigene neparvovec, the first in vivo gene therapy, for the treatment of blindness due to Leber's congenital amaurosis. The treatment was priced at US$850,000 for both eyes.
2019
In May, the FDA approved onasemnogene abeparvovec (Zolgensma) for treating spinal muscular atrophy in children under two years of age. The list price of Zolgensma was set at US$2.125 million per dose, making it the most expensive drug ever.
In May, the EMA approved betibeglogene autotemcel (Zynteglo) for treating beta thalassemia for people twelve years of age and older.
In July, Allergan and Editas Medicine announced a phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. This is one of the first studies of a CRISPR-based in vivo human gene editing therapy, in which the editing takes place inside the human body. The first injection of the CRISPR-Cas system was confirmed in March 2020.
Exagamglogene autotemcel, a CRISPR-based human gene editing therapy, was used in clinical trials for sickle cell disease and beta thalassemia.
2020s
2020
In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene, irrespective of body weight or age.
In August, Audentes Therapeutics reported that three of the 17 children with X-linked myotubular myopathy participating in the clinical trial of AT132, an AAV8-based gene therapy, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. The trial was put on clinical hold.
On 15 October, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorisation for the medicinal product Libmeldy (autologous CD34+ cell enriched population that contains hematopoietic stem and progenitor cells transduced ex vivo using a lentiviral vector encoding the human arylsulfatase A gene), a gene therapy for the treatment of children with the "late infantile" (LI) or "early juvenile" (EJ) forms of metachromatic leukodystrophy (MLD). The active substance of Libmeldy consists of the child's own stem cells which have been modified to contain working copies of the ARSA gene. When the modified cells are injected back into the patient as a one-time infusion, the cells are expected to start producing the ARSA enzyme that breaks down the build-up of sulfatides in the nerve cells and other cells of the patient's body. Libmeldy was approved for medical use in the EU in December 2020.
On 15 October, Lysogene, a French biotechnology company, reported the death of a patient who had received LYS-SAF302, an experimental gene therapy treatment for mucopolysaccharidosis type IIIA (Sanfilippo syndrome type A).
2021
In May, researchers reported using an altered version of HIV as a lentiviral vector to treat 50 children with ADA-SCID, obtaining positive results in 48 of them. This method is expected to be safer than the retroviral vectors commonly used in earlier SCID studies, in which the development of leukemia was usually observed. The approach had already been used in 2019, but in a smaller group with X-SCID.
In June, a clinical trial on six patients affected with transthyretin amyloidosis reported a reduction in the serum concentration of misfolded transthyretin (TTR) protein through CRISPR-based inactivation of the TTR gene in liver cells, with mean reductions of 52% and 87% in the lower- and higher-dose groups respectively. This was done in vivo, without taking cells out of the patient to edit them and reinfuse them later.
In July, the results of a small phase I gene therapy study were published, reporting restoration of dopamine production in seven patients between 4 and 9 years old affected by aromatic L-amino acid decarboxylase deficiency (AADC deficiency).
2022
In February, the first ever gene therapy for Tay–Sachs disease was announced. It uses an adeno-associated virus to deliver a correct copy of the HEXA gene, whose mutation causes the disease, to brain cells. Only two children were treated, in a compassionate-use trial, showing improvements over the natural course of the disease and no vector-related adverse events.
In May, eladocagene exuparvovec was recommended for approval by the European Commission.
In July, results were announced for a haemophilia B gene therapy candidate called FLT180, which uses an adeno-associated virus (AAV) to restore the clotting factor IX (FIX) protein. Normal protein levels were observed with low doses of the therapy, but immunosuppression was required to decrease the risk of vector-related immune responses.
In December, a 13-year-old girl who had been diagnosed with T-cell acute lymphoblastic leukaemia was successfully treated at Great Ormond Street Hospital (GOSH) in the first documented use of therapeutic gene editing for this purpose, after undergoing six months of an experimental treatment once all other treatments had failed. The procedure involved reprogramming a healthy donor T cell to destroy the cancerous T cells, first ridding her of leukaemia, and then rebuilding her immune system using healthy immune cells. The GOSH team used base editing and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs.
2023
In May 2023, the FDA approved beremagene geperpavec (Vyjuvek) for the treatment of wounds in people with dystrophic epidermolysis bullosa (DEB). It is applied as a topical gel that delivers a herpes simplex virus type 1 (HSV-1) vector encoding the collagen type VII alpha 1 chain (COL7A1) gene, which is dysfunctional in those affected by DEB. One trial found that 65% of Vyjuvek-treated wounds closed completely at 24 weeks, compared with only 26% of placebo-treated wounds. Its use as an eyedrop has also been reported, with good results, for a patient with DEB who had vision loss due to widespread blistering.
In June 2023, the FDA gave accelerated approval to Elevidys for Duchenne muscular dystrophy (DMD), restricted to boys 4 to 5 years old, as they are more likely to benefit from the therapy. It consists of a one-time intravenous infusion of a virus (AAV rh74 vector) that delivers a functioning "microdystrophin" gene (138 kDa) into the muscle cells, to act in place of the normal dystrophin (427 kDa) that is mutated in this disease.
In July 2023, researchers reported the development of a new method of modulating gene expression using direct electric current.
In December 2023, two gene therapies were approved for sickle cell disease, exagamglogene autotemcel and lovotibeglogene autotemcel.
2024
In November 2024, the FDA granted accelerated approval for eladocagene exuparvovec-tneq (Kebilidi, PTC Therapeutics), a direct-to-brain gene therapy for aromatic L-amino acid decarboxylase deficiency. It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen, increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure.
List of gene therapies
Gene therapy for color blindness
Gene therapy for epilepsy
Gene therapy for osteoarthritis
Gene therapy in Parkinson's disease
Gene therapy of the human retina
| Technology | Biotechnology | null |
12903 | https://en.wikipedia.org/wiki/Gegenschein | Gegenschein | Gegenschein or counterglow is a faintly bright spot in the night sky centered at the antisolar point. The backscatter of sunlight by interplanetary dust causes this optical phenomenon, which is a component of zodiacal light and part of the zodiacal light band.
Explanation
Like zodiacal light, gegenschein is sunlight scattered by interplanetary dust. Most of this dust orbits the Sun near the ecliptic plane, with a possible concentration of particles centered at the L2 Lagrangian point of the Earth–Sun system.
Gegenschein is distinguished from zodiacal light by its high angle of reflection of the incident sunlight on the dust particles. It forms a slightly brighter elliptical spot, 8–10° across, directly opposite the Sun within the dimmer band of zodiacal light that runs along the zodiac constellations. The intensity of the gegenschein is relatively enhanced because each dust particle is seen at full phase; it has a difficult-to-measure apparent magnitude of +5 to +6 and a very low surface brightness, in the +10 to +12 magnitude range.
History
It is commonly stated that the gegenschein was first described by the French Jesuit astronomer and professor Esprit Pézenas (1692–1776) in 1730. Further observations were supposedly made by the German explorer Alexander von Humboldt during his South American journey from 1799 to 1803. It was Humboldt who first used the German term Gegenschein. However, research conducted in 2021 by Texas State University astronomer and professor Donald Olson found that the Danish astronomer Theodor Brorsen was actually the first person to observe and describe it, in 1854, although Brorsen had thought that Pézenas had observed it first. Olson believes what Pézenas actually observed was an auroral event, as he described the phenomenon as having a red glow; Olson found many other reports of auroral activity from around Europe and Asia on the same date Pézenas made his observation. Humboldt's report instead described glowing triangular patches on both the western and eastern horizons shortly after sunset, while true gegenschein is most visible near local midnight, when it is highest in the sky.
Brorsen published the first thorough investigations of the gegenschein in 1854. T. W. Backhouse discovered it independently in 1876, as did Edward Emerson Barnard in 1882. In modern times, the gegenschein is not visible in most inhabited regions of the world due to light pollution.
| Physical sciences | Solar System | Astronomy |
2249912 | https://en.wikipedia.org/wiki/Rio%20de%20Janeiro%20Metro | Rio de Janeiro Metro | The Rio de Janeiro Metro, commonly referred to as just the Metrô, is a rapid transit network that serves the city of Rio de Janeiro, Brazil. The Metrô was inaugurated on 5 March 1979, and consisted of five stations operating on a single line. The system currently serves 41 stations, divided into three lines: Line 1; Line 2, which shares with Line 1 a stretch of line covering 10 stations; and Line 4. Metrô Rio has the second highest passenger volume of the metro systems in Brazil, after the São Paulo Metro.
Line 1 (orange line) serves downtown Rio, tourist areas in the South Zone, and several neighbourhoods in the North Zone. It is a semicircular line and is fully underground, running from Uruguai Station to Ipanema/General Osório Station. Line 2 (green line) serves working-class residential neighborhoods extending toward the north. It is a northwest-to-southeast line and almost completely above ground (mostly at grade and partly elevated). This line started as light rail but, owing to increasing numbers of commuters, gradually changed to rapid transit; because of that origin it is at grade except for Estácio Station (the former connection station between Lines 1 and 2), which is underground, and Cidade Nova Station, which is elevated. Line 4 (yellow line) connects Barra da Tijuca/Jardim Oceânico Station in the West Zone to Ipanema/General Osório Station on Line 1.
The Government of the State of Rio de Janeiro remains responsible for the expansion of the metro network through Rio Trilhos. In late December 2007, the lease was renewed until 2038, and Metrô Rio assumed responsibility for building Cidade Nova Station, which links Line 2 and Line 1 and ended the need to transfer between stations, for purchasing 114 cars, and for constructing Uruguai Station, extending Line 1 further north.
The extension works of Line 2, called Line 1A, which ended the need for a transfer at Estácio Station and allowed a direct connection from Pavuna Station to Botafogo, were started by Metrô Rio on 13 November 2008, and the tracks were completed in December 2009. With the extension, the 250,000 passengers who travel daily on Line 2 no longer need to change trains to reach the South Zone. The interconnection of the two metro lines reduces, by up to 13 minutes, the journey time from Pavuna Station to the city's downtown, the destination of 83% of Line 2's passengers.
History
Rio de Janeiro is the second largest city in Brazil and the most popular tourist attraction in the country. After 1950, the number of motor vehicles on the roads increased dramatically. Rio de Janeiro lies in a hilly region, between the mountains and the Atlantic Ocean. The landscape of the city is extremely uneven, making travelling by car or bus a very time-consuming task through the narrow streets. These conditions are ideal for trams but not for the increasing traffic of motor vehicles. By the early 1960s, traffic jams, pollution, and smog had become a serious problem in the city. To overcome these problems, local transport authorities decided to reduce the tram network and switch over to a metro network.
On 14 December 1968, the Companhia do Metropolitano do Rio de Janeiro (Metro Company of Rio de Janeiro in English) was created through State Law number 1736. In March 1975, with Law-Decree number 25, the company effectively came into existence. On 23 June 1970, construction work started in Jardim da Glória. Owing to a lack of resources, construction work stopped from 1971 to 1974 and only resumed the following year. The Rio de Janeiro Metro began operating in March 1979, during the administration of governor Chagas Freitas. In the beginning, there were only five stations: Praça Onze, Central, Presidente Vargas, Cinelândia, and Glória, operating from 9:00 AM to 3:00 PM.
In its initial 10 days, the system transported more than half a million people, averaging sixty thousand passengers per day. At that time, the subway operated with only four trains of four cars each, at an average interval of eight minutes. In December of the same year, the operating schedule was extended until 11:00 PM, including Saturdays. In 1980, the metro system began to expand with the opening of the Uruguaiana and Estácio stations. The two new stations brought larger passenger demand, compelling an increase in the number of trains from four to six.
The Carioca station in downtown Rio de Janeiro, the busiest station with more than eighty thousand passengers a day, was finished in January 1981. By the end of the same year, the Catete, Morro Azul (now called Flamengo), and Botafogo stations were completed. In November 1981, Line 2 (Linha 2 in Portuguese) started operating with only two stations: São Cristóvão and Maracanã (which serves the Maracanã football stadium). In December, Largo do Machado Station began service, completing the southern section of Line 1. In 1982, the complementary inaugurations of the northern section of Line 1 started, with the opening of the Afonso Pena, São Francisco Xavier and Saens Peña stations.
To allow the completion of the second line to Irajá, in 1983 the trains on this line began operating from 6:00 AM to 2:00 PM. After a month, this schedule was extended until 8:00 PM, and a free bus service was established, integrating the Estácio, São Cristóvão, and Maracanã stations. After the conclusion of the works, the Pre-Metro stations Maria da Graça, Del Castilho, Inhaúma and Irajá were opened. In 1984, commercial operation of the second line began, with five trains running on workdays at five-and-a-half-minute intervals.
Following the expansion, the Triagem station was inaugurated in July 1988, the year the subway/train integration ticket was created. In 1991, the Engenho da Rainha station was inaugurated. From 1991 to 1996, two more stations were opened, Thomaz Coelho and Vicente de Carvalho, and in this period the headway across the nine stations of the second line was reduced to six minutes. In July 1998, Cardeal Arcoverde Station, in the traditional neighbourhood of Copacabana, was inaugurated. Five more stations became operational in the following two months: Colégio, Coelho Neto, Engenheiro Rubens Paiva, Acari/Fazenda Botafogo and Pavuna.
In 1997, the Carnival Operation (Operação de Carnaval in Portuguese) began, with continuous service during the days of the Rio Carnival festivities. In December of that year the system was privatised, and management and operation of the company passed into the hands of the Opportrans consortium under a 20-year concession, leaving responsibility for expansion of the network with the state government of Rio de Janeiro through the company Rio Trilhos. Since 1999, Opportrans has also run a special operation for the Rio Réveillon (New Year's Eve celebrations), using illustrated, time-scheduled tickets to avoid overcrowding and provide better service.
In 2003 Siqueira Campos Station in Copacabana was inaugurated. Cantagalo Station beyond Siqueira Campos was due to be completed in March 2006 but owing to financial problems the opening date was postponed to 15 December. This was again postponed and the final opening took place in February 2007. At the same time construction began on the subway extension to General Osório station in Ipanema. This was opened in December 2009.
In late December 2007, Metro Rio renewed the concession, then defined as for another 20 years, to 2038.
Line 1A from Pavuna to Botafogo opened in December 2009 with a connection between São Cristóvão and Central. Passenger traffic at Estácio was reduced, and the elimination of the need to transfer between Lines 1 and 2 saves up to 13 minutes of journey time. A new station on the new section, Cidade Nova, opened in November 2010; the station is on Avenida Presidente Vargas and serves the City Hall.
In June 2010, the construction of Line 4 began, linking Ipanema to Barra da Tijuca, where most events of the 2016 Olympic Games occurred.
System
Rolling stock
The cars are of monoblock construction in stainless steel. Passenger train compositions normally use six cars (four on rare occasions), but Line 2 was planned to use eight cars. Older-stock driving cars can accommodate a maximum of 351 passengers (40 seated), while non-driving cars accommodate a maximum of 378 passengers (48 seated). Thus, in six-car configurations the maximum number of passengers that can be transported is 2,214.
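As a quick check of the quoted 2,214-passenger figure, here is a minimal sketch of the capacity arithmetic. The split of two driving cars and four non-driving cars in a six-car set is an assumption consistent with that total, not stated explicitly in the text.

```python
# Per-car maxima quoted above.
DRIVING_CAR_MAX = 351      # passengers per driving car (40 seated)
NON_DRIVING_CAR_MAX = 378  # passengers per non-driving car (48 seated)

def train_capacity(driving: int, non_driving: int) -> int:
    """Maximum passengers for a given mix of driving and non-driving cars."""
    return driving * DRIVING_CAR_MAX + non_driving * NON_DRIVING_CAR_MAX

# A six-car train with a driving car at each end:
print(train_capacity(2, 4))  # 2*351 + 4*378 = 2214
```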
Line 1 is served exclusively by older types of rolling stock, which are full metro stock. Since Line 2 was formerly a light rail line, some of its older stock has been converted from light rail to metro stock, while the newer B type stock is full metro stock. The line was initially served by old A type stock, built by La Brugeoise et Nivelles and Cobrasma.
Inside each coach, seat arrangement is both parallel and perpendicular to the windows. When the left side has parallel seats, the right side has perpendicular seats, and vice versa. Each vertical seat has a handle for easier standing. There are vertical stanchions from ceiling to floor for standing passengers, one set in front of the horizontal seats, another set at the middle of the coach. Both A and B type trains are air-conditioned.
Lines 1, 2 and 4 share EMUs built by CRRC Changchun Railway Vehicles Co. Ltd. The 6-car trains were designed in 18 months and all 19 sets are currently operating in passenger service. The trains entered revenue service 23 months after contract award.
Lines
Line 1 (Orange): Uruguai, Saens Peña, São Francisco Xavier, Afonso Pena, Estácio, Praça Onze, Central, Presidente Vargas, Uruguaiana, Carioca, Cinelândia, Glória, Catete, Largo do Machado, Flamengo, Botafogo, Cardeal Arcoverde, Siqueira Campos, Cantagalo, General Osório.
All stations are underground. Cinelândia and Central stations have island platforms. Carioca, Saens Peña, Botafogo and General Osório stations have both side and island platforms, although Saens Peña consists of two island platforms and three tracks; the northernmost of the three tracks appears to be disused and planned for use after the Line 1 extension. Saens Peña is a very busy station, with train turnarounds made very quickly. All other stations have side platforms, where the up and down tracks are divided by a low wall. Siqueira Campos, Carioca, Central, Uruguaiana and Cardeal Arcoverde have a large mezzanine floor between the surface and the underground tracks.
Central, which is a major interchange point between the Metro, local and longer-distance bus lines, and the SuperVia train network, is the busiest station on the network. The Cardeal Arcoverde station was dynamited out of the base of São João Mountain and retains a cavelike structure. General Osório has paintings in its hallways recalling prehistoric attempts at communication.
Uruguai Station opened in March 2014, becoming the new terminal station of Line 1 in the North Side of Rio de Janeiro.
Line 2 (Green): Pavuna, Engenheiro Rubens Paiva, Acari/Fazenda Botafogo Station, Coelho Neto, Colégio, Irajá, Vicente de Carvalho, Thomaz Coelho, Engenho da Rainha, Inhaúma, Nova América/Del Castilho, Maria da Graça, Triagem, Maracanã, São Cristóvão, Cidade Nova, Estácio (closed on working days).
Line 1A: São Cristóvão, Cidade Nova, Central, Presidente Vargas, Uruguaiana, Carioca, Cinelândia, Glória, Catete, Largo do Machado, Flamengo, Botafogo.
Line 1A is actually an extension of Line 2 to Botafogo Station. Line 2 is elevated from Irajá to Colégio. Many of the stations have island platforms, although Pavuna has both side and island platforms. The line runs underground from Central to Botafogo.
Owing to its origin as light rail, Line 2 proper is fully above ground (except Estácio Station, which is underground). Most stations, like Irajá, have an island platform, whereas some, like Triagem, have side platforms. Maracanã Station is directly linked by an overbridge to the Maracanã Stadium across the street.
Connections
Line 1 is fully underground, with Cardeal Arcoverde being the deepest station; it lies under São João mountain. Non-free interchange with the Santa Teresa Tram is possible at Carioca, and with the SuperVia trains at Central. Interchange to Line 2 is possible at all stations between Botafogo and Central on weekdays, and at Estácio on weekends and holidays. Interchange to buses is possible at Cardeal Arcoverde, Botafogo, Largo do Machado, Estácio, São Francisco Xavier and Saens Peña.
Line 2 is fully above-ground, except stations on Line 1A. It is elevated from Irajá to Colégio and the rest is at grade, except Cidade Nova and Triagem, which are elevated. Interchange with the train is possible at Triagem, Pavuna, São Cristóvão and Central. Interchange to line 1 is possible at Line 1A stations on weekdays, and at Estácio on weekends and holidays. Bus interchange is possible at Nova América/Del Castilho, Coelho Neto and Pavuna.
Fare structure
Single Journey (Unitário in Portuguese): This is the most popular option. When commuters buy a ticket from the counter, they can then travel by metro from any station to any station on any line. Once they leave the station, they need another ticket for another trip. There is a flat single fare (Unitário) of R$4.60 as of March 2020, regardless of distance.
Single Journey with bus extension (Metrô na Superfície in Portuguese): Metro Rio operates a bus service from some of its stations, which acts as an extension of the metro service. No additional fee is charged for this service; however, when buying the ticket the traveller must ask for a Metrô na Superfície card. Cards can be bought directly on the Metrô na Superfície bus.
Single Journey with express bus service: not to be confused with the bus extension, this fare allows a passenger to travel on the subway and on select bus services (Cosme Velho and Urca are served by express buses), also run by the metro company. The fare costs R$4.35 as of April 2014, and the ticket can be purchased on the subway or when boarding select buses.
Prepaid Card (Cartão Pré-pago): a prepaid card, valid on the metro and on the buses run by the metro company (not valid on regular city buses), can be bought at any metro station. The card itself is free of charge; however, a minimum prepayment of R$10 is required. Tickets (disposable contactless cards) are purchased from a cashier in a booth. Prepaid tickets can be topped up at the vending booths or at automated ticket-recharging machines at select stations. Cards cannot be bought at the machines, and no change is given. Cash is the only accepted means of payment on any of the sales channels. A code sketch of this prepaid-card logic appears at the end of this section.
The Barra Expresso included a single ticket pass and the fare for a bus trip to Barra da Tijuca, a neighborhood located in the West Side of Rio. This integration ended when Line 4 was opened to the public.
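As referenced above, here is a minimal sketch of the prepaid-card fare logic, assuming the R$10 minimum initial load and the flat R$4.60 fare quoted earlier. The class and method names are illustrative only, not the operator's actual ticketing system.

```python
class PrepaidCard:
    MIN_INITIAL_LOAD = 10.00  # R$, minimum first prepayment per the text
    FLAT_FARE = 4.60          # R$, flat single fare as of March 2020

    def __init__(self, initial_load: float):
        if initial_load < self.MIN_INITIAL_LOAD:
            raise ValueError("Minimum prepayment is R$10")
        self.balance = initial_load

    def top_up(self, amount: float) -> None:
        """Add credit, e.g. at a booth or recharging machine."""
        self.balance += amount

    def enter_station(self) -> None:
        """Debit one flat fare; distance travelled does not matter."""
        if self.balance < self.FLAT_FARE:
            raise ValueError("Insufficient balance")
        self.balance -= self.FLAT_FARE

card = PrepaidCard(10.00)
card.enter_station()
print(f"R${card.balance:.2f}")  # R$5.40 remaining after one trip
```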
Modernization
The investment of R$1.15 billion also included the purchase of 19 additional train sets; the 114 new cars use a technology that allows passengers to walk between cars inside the train. The first of the new sets was scheduled to arrive in December 2010 and the others to enter service gradually by December 2011. These cars were intended for Line 2 and have an air-conditioning system sized to withstand direct sun and heat, as most of the line is at the surface. With the fleet enlarged by 63%, the concessionaire also planned to standardize the compositions of Lines 1 and 2: all 49 trains will have six cars.
The control, signalling, ventilation and energy systems will also be expanded and modernized. The energy supply for the metro's operation will be reinforced with two new dedicated substations, at Uruguaiana and Largo do Machado stations, and with the remodelling of the São Cristóvão and Central substations. Signalling will be automated on both lines. Metrô Rio will enhance ventilation at the stations and modernize all equipment of the Control and Operations Center, from where the entire daily operation is monitored. These actions, combined with the extension of Line 2, will allow Metrô Rio to transport more than 1.1 million passengers per day.
Expansion
Line 4 (yellow line) was completed on 30 July 2016, connecting Barra da Tijuca neighbourhood in the West Zone, passing under São Conrado and Rocinha, to Ipanema/General Osório Station. All stations are underground, but when arriving in Barra da Tijuca, trains exit a tunnel, pass briefly by an elevated bridge and go underground again.
Line 3 is proposed to run from Carioca to Visconde de Itaboraí via a tunnel underneath the Guanabara Bay, and via Araribóia, Antonina, Guaxindiba and Itambi stations.
Line 5 will run from Carioca, which interchanges with Lines 1, 2 and 3, to Gaveá interchanging for Line 4.
Network map
In popular culture
In literature
The collection of narratives Entre Linhas: Histórias do Metrô e Trem do Rio de Janeiro (2023), by Sofia Neves, delves into the stories of anonymous passengers on the Rio de Janeiro subway.
| Technology | Brazil | null |
2250081 | https://en.wikipedia.org/wiki/Vital%20signs | Vital signs | Vital signs (also known as vitals) are a group of the four to six most crucial medical signs that indicate the status of the body's vital (life-sustaining) functions. These measurements are taken to help assess the general physical health of a person, give clues to possible diseases, and show progress toward recovery. The normal ranges for a person's vital signs vary with age, weight, sex, and overall health.
There are four primary vital signs: body temperature, blood pressure, pulse (heart rate), and breathing rate (respiratory rate), often notated as BT, BP, HR, and RR. However, depending on the clinical setting, the vital signs may include other measurements called the "fifth vital sign" or "sixth vital sign."
Early warning scores have been proposed that combine the individual values of vital signs into a single score. This was done in recognition that deteriorating vital signs often precede cardiac arrest and/or admission to the intensive care unit. Used appropriately, a rapid response team can assess and treat a deteriorating patient and prevent adverse outcomes.
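To illustrate how such a score might aggregate the individual vital signs, here is a minimal sketch. The bands and point values are illustrative placeholders, loosely in the spirit of published early warning scores, and are not any official clinical scheme; the function names are mine.

```python
def early_warning_score(resp_rate, heart_rate, systolic_bp, temp_c):
    """Toy aggregate score: 0 points per parameter in a 'normal' band,
    more points further outside it. Thresholds are illustrative only."""
    def band_points(value, bands):
        # bands: list of (low, high, points); first matching band wins.
        for low, high, points in bands:
            if low <= value <= high:
                return points
        return 3  # anything outside every listed band scores maximally

    score = 0
    score += band_points(resp_rate,   [(12, 20, 0), (9, 11, 1), (21, 24, 2)])
    score += band_points(heart_rate,  [(51, 90, 0), (41, 50, 1),
                                       (91, 110, 1), (111, 130, 2)])
    score += band_points(systolic_bp, [(111, 219, 0), (101, 110, 1),
                                       (91, 100, 2)])
    score += band_points(temp_c,      [(36.1, 38.0, 0), (35.1, 36.0, 1),
                                       (38.1, 39.0, 1)])
    return score

print(early_warning_score(18, 80, 120, 37.0))  # 0 -> no escalation
print(early_warning_score(26, 125, 95, 38.5))  # 8 -> would trigger review
```

A high total would prompt review by a rapid response team, matching the escalation logic described above.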
Primary vital signs
There are four primary vital signs which are standard in most medical settings:
Body temperature
Heart rate or Pulse
Respiratory rate
Blood pressure
The equipment needed is a thermometer, a sphygmomanometer, and a watch. Although a pulse can be taken by hand, a stethoscope may be required for a clinician to take a patient's apical pulse.
Temperature
Temperature recording gives an indication of core body temperature, which is normally tightly controlled (thermoregulation), as it affects the rate of chemical reactions. Body temperature is maintained through a balance of the heat produced by the body and the heat lost from the body.
Temperature can be recorded in order to establish a baseline for the individual's normal body temperature for the site and measuring conditions.
Temperature can be measured from the mouth, rectum, axilla (armpit), ear, or skin. Oral, rectal, and axillary temperature can be measured with either a glass or electronic thermometer. Note that rectal temperature measures approximately 0.5 °C higher than oral temperature, and axillary temperature approximately 0.5 °C less than oral temperature. Aural and skin temperature measurements require special devices designed to measure temperature from these locations.
While 37 °C (98.6 °F) is considered "normal" body temperature, there is some variance between individuals. Most have a normal body temperature set point that falls within a narrow range around this value.
The main reason for checking body temperature is to look for signs of systemic infection or inflammation, which can manifest as a fever. Fever is considered a temperature of 38 °C or above. Other causes of elevated temperature include hyperthermia, which results from unregulated heat generation or abnormalities in the body's heat exchange mechanisms.
Temperature depression (hypothermia) also needs to be evaluated. Hypothermia is classified as a temperature below 35 °C.
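Putting the site offsets and the fever/hypothermia cutoffs above together, a minimal sketch follows. The helper names and the convention of converting each reading to an oral-equivalent value are mine, not a clinical standard.

```python
# Approximate corrections to an oral-equivalent reading, per the offsets
# quoted above: rectal reads ~0.5 C high, axillary ~0.5 C low.
SITE_OFFSET_TO_ORAL = {"oral": 0.0, "rectal": -0.5, "axillary": +0.5}

def oral_equivalent(temp_c: float, site: str) -> float:
    """Convert a reading at a given site to an approximate oral value."""
    return temp_c + SITE_OFFSET_TO_ORAL[site]

def classify(temp_c: float) -> str:
    """Apply the cutoffs from the text: <35 C hypothermia, >=38 C fever."""
    if temp_c < 35.0:
        return "hypothermia"
    if temp_c >= 38.0:
        return "fever"
    return "normothermic"

reading = oral_equivalent(38.4, "rectal")   # rectal reads ~0.5 C high
print(round(reading, 1), classify(reading)) # 37.9 normothermic
```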
It is also recommended to review the trend of the patient's temperature over time. A fever of 38 °C does not necessarily indicate an ominous sign if the patient's previous temperature has been higher.
Pulse
The pulse is the rate at which the heart beats while pumping blood through the arteries, recorded as beats per minute (bpm). It may also be called "heart rate". In addition to providing the heart rate, the pulse should also be evaluated for strength and obvious rhythm abnormalities. The pulse is commonly taken at the wrist (radial artery). Alternative sites include the elbow (brachial artery), the neck (carotid artery), behind the knee (popliteal artery), or in the foot (dorsalis pedis or posterior tibial arteries). The pulse is taken with the index finger and middle finger by pushing with firm yet gentle pressure at the locations described above, and counting the beats felt per 60 seconds (or per 30 seconds and multiplying by two). The pulse rate can also be measured by listening directly to the heartbeat using a stethoscope. The pulse may vary due to exercise, fitness level, disease, emotions, and medications. The pulse also varies with age. A newborn can have a heart rate of 100–160 bpm, an infant (0–5 months old) a heart rate of 90–150 bpm, and a toddler (6–12 months old) a heart rate of 80–140 bpm. A child aged 1–3 years old can have a heart rate of 80–130 bpm, a child aged 3–5 years old a heart rate of 80–120 bpm, an older child (age of 6–10) a heart rate of 70–110 bpm, and an adolescent (age 11–14) a heart rate of 60–105 bpm. An adult (age 15+) can have a heart rate of 60–100 bpm.
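The age bands above translate naturally into a small lookup table. This is a sketch; the quoted ranges overlap at their edges, so the cutoffs chosen here are one reasonable reading of the text rather than a clinical standard.

```python
def normal_hr_range(age_years: float) -> tuple[int, int]:
    """Normal resting heart rate (bpm) for a given age, per the ranges
    quoted above. Band edges are approximations where the text overlaps."""
    bands = [
        (1 / 12, (100, 160)),  # newborn
        (6 / 12, (90, 150)),   # infant, 0-5 months
        (1.0,    (80, 140)),   # toddler, 6-12 months
        (3.0,    (80, 130)),   # 1-3 years
        (5.0,    (80, 120)),   # 3-5 years
        (10.0,   (70, 110)),   # 6-10 years
        (14.0,   (60, 105)),   # adolescent, 11-14 years
    ]
    for upper_age, rng in bands:
        if age_years <= upper_age:
            return rng
    return (60, 100)           # adult, 15+

def hr_in_range(age_years: float, bpm: int) -> bool:
    low, high = normal_hr_range(age_years)
    return low <= bpm <= high

print(normal_hr_range(8))   # (70, 110)
print(hr_in_range(30, 72))  # True
```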
Respiratory rate
Average respiratory rates vary between ages, but the normal reference range for people age 18 to 65 is 16–20 breaths per minute. The value of respiratory rate as an indicator of potential respiratory dysfunction has been investigated but findings suggest it is of limited value. Respiratory rate is a clear indicator of acidotic states, as the main function of respiration is removal of CO2 leaving bicarbonate base in circulation.
Blood pressure
Blood pressure is recorded as two readings: a higher systolic pressure, which occurs during the maximal contraction of the heart, and the lower diastolic or resting pressure. In adults, a normal blood pressure is 120/80, with 120 being the systolic and 80 being the diastolic reading. Usually, the blood pressure is read from the left arm unless there is some damage to the arm. The difference between the systolic and diastolic pressure is called the pulse pressure. The measurement of these pressures is now usually done with an aneroid or electronic sphygmomanometer. The classic measurement device is a mercury sphygmomanometer, using a column of mercury measured off in millimeters. In the United States and UK, the common form is millimeters of mercury, while elsewhere SI units of pressure are used. There is no natural 'normal' value for blood pressure, but rather a range of values that on increasing are associated with increased risks. The guideline acceptable reading also takes into account other co-factors for disease. Therefore, elevated blood pressure (hypertension) is variously defined when the systolic number is persistently over 140–160 mmHg. Low blood pressure is hypotension. Blood pressures are also taken at other portions of the extremities. These pressures are called segmental blood pressures and are used to evaluate blockage or arterial occlusion in a limb (see Ankle brachial pressure index).
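As a small worked example of the arithmetic described above: pulse pressure is simply systolic minus diastolic, and the 140 mmHg systolic cutoff is one of the hypertension thresholds the text mentions. The function names are illustrative.

```python
def pulse_pressure(systolic: int, diastolic: int) -> int:
    """Pulse pressure = systolic minus diastolic pressure, in mmHg."""
    return systolic - diastolic

def flag_bp(systolic: int) -> str:
    # 140 mmHg is the lower end of the 140-160 range quoted above;
    # a single elevated reading is not diagnostic on its own.
    if systolic >= 140:
        return "possible hypertension (confirm with repeated readings)"
    return "within guideline range"

print(pulse_pressure(120, 80))  # 40 mmHg for a normal 120/80 reading
print(flag_bp(150))             # possible hypertension (confirm ...)
```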
Other signs
In the U.S., in addition to the above four, many providers are required or encouraged by government technology-in-medicine laws to record the patient's height, weight, and body mass index. In contrast to the traditional vital signs, these measurements are not useful for assessing acute changes in state because of the rate at which they change; however, they are useful for assessing the impact of prolonged illness or chronic health problems.
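Since body mass index is mentioned here, a one-line sketch of its standard definition may help: weight in kilograms divided by the square of height in metres. The function name is illustrative.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9
```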
The definition of vital signs may also vary with the setting of the assessment. Emergency medical technicians (EMTs), in particular, are taught to measure the vital signs of respiration, pulse, skin, pupils, and blood pressure as "the 5 vital signs" in a non-hospital setting.
Fifth vital signs
The "fifth vital sign" may refer to a few different parameters.
Pain is considered a standard fifth vital sign in some organizations, such as the U.S. Veterans Affairs. Pain is measured on a 0–10 pain scale based on subjective patient reporting and may be unreliable. Some studies show that recording pain routinely may not change management.
Menstrual cycle
Oxygen saturation (as measured by pulse oximetry)
Blood glucose level
Sixth vital signs
There is no standard "sixth vital sign"; its use is more informal and discipline-dependent.
End-tidal CO2
Functional status
Shortness of breath
Gait speed
Delirium
Variations by age
Children and infants have respiratory and heart rates that are faster than those of adults, as noted in the pulse section above.
Monitoring
Monitoring of vital parameters most commonly includes at least blood pressure and heart rate, and preferably also pulse oximetry and respiratory rate. Multimodal monitors that simultaneously measure and display the relevant vital parameters are commonly integrated into the bedside monitors in intensive care units, and the anesthetic machines in operating rooms. These allow for continuous monitoring of a patient, with medical staff being continuously informed of the changes in the general condition of a patient.
While monitoring has traditionally been done by nurses and doctors, a number of companies are developing devices that can be used by consumers themselves. These include Cherish Health, Scanadu and Azoi.
| Biology and health sciences | Diagnostics | Health |
2252691 | https://en.wikipedia.org/wiki/Dorudon | Dorudon | Dorudon ("spear-tooth") is a genus of extinct basilosaurid ancient whales that lived alongside Basilosaurus 41.03 to 33.9 million years ago in the Eocene. It was a small whale, with D. atrox measuring about 5 m (16 ft) long. Dorudon lived in warm seas around the world and fed on small fish and mollusks. Fossils have been found along the former shorelines of the Tethys Sea in present-day Egypt and Pakistan, as well as in the United States, New Zealand and Western Sahara.
Taxonomic history
In 1845, Robert W. Gibbes described Dorudon serratus based on a fragmentary maxilla and a few teeth found in South Carolina. He concluded that the teeth must have belonged to a mammal since they were two-rooted, that they must have been teeth from a juvenile since they were hollow, and also noted their similarity to the teeth then described for Zeuglodon (Basilosaurus). When exploring the type locality, Gibbes discovered a lower jaw and twelve caudal vertebrae, which he felt obliged to assign to Zeuglodon together with his original material. Gibbes concluded that Dorudon were juvenile Zeuglodon and consequently withdrew his new genus. He did, however, allow Louis Agassiz at Harvard to examine his specimens, and the Swiss professor replied that these were neither teeth of a juvenile nor those of Zeuglodon, but of a separate genus, just as Gibbes had first proposed.
In 1906, Charles William Andrews described Prozeuglodon atrox (="Proto-Basilosaurus") based on a nearly complete skull, a dentary and three associated vertebrae presented to him by the Geological Museum of Cairo. Remington Kellogg, however, realized that Andrews' specimen was a juvenile, and, he assumed, the same species as Zeuglodon isis, described by Andrews in 1906. Kellogg also realized that the generic name Zeuglodon was invalid and therefore recombined it as Prozeuglodon isis. Since then many specimens have been referred to Prozeuglodon atrox, including virtually every part of the skeleton, and it has become obvious that it is a separate genus, not a juvenile "Proto-Zeuglodon". Kellogg placed several of the species of Zeuglodon described from Egypt in the early 20th century (including Z. osiris, Z. zitteli, Z. elliotsmithii and Z. sensitivius) in the genus Dorudon. Gingerich later synonymized these four species and grouped them as Saghacetus osiris.
The current taxonomic status of Dorudon is based on Uhen's 2004 revision of Dorudon and detailed description of D. atrox. Before this, the taxonomy of Dorudon was in disarray and based on a limited set of specimens.
D. atrox is known from Egypt, and D. serratus from Georgia and South Carolina in the United States. The type species D. serratus was, and still is, based solely on two partial maxillae with a few teeth, plus cranial fragments, a dozen vertebrae and some additional material collected but not described by Gibbes and referred to the type species. Before Uhen 2004, D. atrox was based solely on Andrews' holotype skull, lower jaw, and the vertebrae he referred to it, but it is now the best-known archaeocete species.
The two species of Dorudon differ from other members of Dorudontinae mainly in size: they are considerably larger than Saghacetus and slightly larger than Zygorhiza, but they also differ from both these genera in dental and/or cranial morphology. The limited known material for D. serratus makes it difficult to compare the two species of Dorudon. Uhen placed D. atrox in the same genus as D. serratus because of similarities in size and morphology, but kept them as separate species because of differences in dental morphology. Even though D. serratus is the type species, the description of Dorudon is largely based on D. atrox because of its completeness. The cranial morphology of D. atrox makes it distinct from all other archaeocetes.
Description
Dorudon was a medium-sized whale, with D. atrox reaching about 5 m (16 ft) in length. Dorudontines were originally believed to be juvenile individuals of Basilosaurus, as their fossils are similar but smaller. They have since been shown to be a distinct genus following the discovery of Dorudon juveniles. Although they looked very much like modern whales, basilosaurines and dorudontines lacked the melon organ that allows their descendants to use echolocation as effectively as modern whales. Like other basilosaurids, their nostrils were midway from the snout to the top of the head.
Dentition
The dental formula for Dorudon atrox is 3.1.4.2 in the upper jaw and 3.1.4.3 in the lower jaw: three incisors, one canine, four premolars, and two (upper) or three (lower) molars on each side.
Typical for cetaceans, the upper incisors are aligned with the cheek teeth, and, except the small I1, separated by large diastemata containing pits into which the lower incisors fit. The upper incisors are simple conical teeth with a single root, lacking accessory denticles, and difficult to distinguish from lower incisors. The upper incisors are missing in most specimens and are only known from two specimens. The upper canine is a little larger than the upper incisors, and, like them, directed slightly buccally and mesially.
P1, only preserved in a single specimen, is the only single-rooted upper premolar. Apparently, P1 is conical, smaller than the remaining premolars and lacks accessory denticles. P2 is the largest upper tooth and the first in the upper row with large accessory denticles. Like the more posterior premolars, it is buccolingually compressed and double-rooted. It has a dominant central protocone flanked by denticles that decrease in size mesially and distally, resulting in a tooth with a triangular profile. P3 is similar to but slightly smaller than P2, except that it has a projection on the lingual side which is the remnant of a third root. In P4, smaller than P2–3, the larger distal root is formed by the fusion of two roots.
The upper molars extend onto the zygomatic arch and are considerably smaller than their neighbouring premolars. Like P4, their distal root is wider than the mesial and formed by the fusion of two roots. The profiles of the molars are more rounded than those of the premolars.
Similar to the upper incisors, the lower incisors are simple conical teeth, curved distally and aligned with the cheek teeth. I1, the smallest tooth, sits on the anteriormost portion of the dentary, with its alveolus open towards the mandibular symphysis and located as close to the alveolus of I2 as possible. I2, I3 and C1 are very similar to one another and considerably larger than I1.
The lower premolars are double-rooted, buccolingually compressed teeth, except the deciduous P1 which is single-rooted. P3 is the second-largest cheek tooth, P4 the largest; both are very similar, dominated by the central cusp.
In the lower molars, the accessory denticles on the mesial edges are replaced by a deep groove called the reentrant groove. The apical cusp is the primitive protoconid. M2 and M3 are morphologically very similar. M3 is sitting high on the ascending mandibular ramus.
Paleoecology
Dorudon calves may have fallen prey to hungry Basilosaurus, as shown by unhealed bite marks on the skulls of some juvenile Dorudon.
| Biology and health sciences | Cetaceans | Animals |
12009271 | https://en.wikipedia.org/wiki/Turrid | Turrid | Turrid, plural turrids, is a common name for a very large group of predatory sea snails, marine gastropod mollusks which until recently were all classified in the family Turridae. However, recently the family was discovered to be polyphyletic and therefore was split into a number of families.
The original family Turridae (sensu Powell, 1966) contained more than 4,000 species. It was the largest mollusk family and the largest group of marine caenogastropods. There were approximately 27,000 described scientific names (accepted names plus synonyms) within the family Turridae. Turrids constituted more than half of the predatory species of gastropods in some parts of the world (Taylor et al. 1980). However, this very large family was shown to be polyphyletic, and in 2011 it was divided into 13 separate families by Bouchet, Kantor, Sysoev and Puillandre.
The single most complete collection of turrids in museums worldwide is in the Academy of Natural Sciences of Philadelphia malacology collection; this is because of specialized collecting by the American malacologist Virginia Orr Maes (1920-1986).
Distribution
Turrids are found worldwide in every sea and ocean from both poles to the tropics. They occur from the low intertidal zone to depths of more than eight thousand metres (e.g., Xanthodaphne levis Sysoev, 1988, collected at 7,974–8,006 m in the Bougainville Trench). However, most species of turrids are found in the neritic zone.
Shell description
Most turrids are rather small, with a height under 2 cm, but the adult shells of different species are between 0.3 and 11.4 cm in height.
The shape of the shells is more or less fusiform, varying from very high-spired to broadly ovate. The whorls are elongate to broadly conical.
The sculpture is very variable in form, but most have axial sculpture or spiral sculpture (or a combination of both). Others may be reticulate, beaded, nodulose, or striate.
The aperture of the shell very often has a V-shaped sinus or notch, an indentation on the upper end of the outer lip. This accommodates the anal siphonal notch, commonly known as the "turrid notch". The siphonal canal is usually open, varying from short and stocky to long and slender. The position of the turrid notch of the shell and the form and sculpture of the whorls have traditionally been the primary methods of classifying the turrids.
The columella is usually smooth and only seldom shows labial plicae. The operculum is horny, but is not always present.
Turrids are carnivorous, predatory gastropods. Most species have a poison gland used with the toxoglossan radula to prey on vertebrate and invertebrate animals (mostly polychaete worms), or in self-defense. Some turrids have lost the radula and the poison gland. The radula, when present, has two or three teeth in a row. It lacks lateral teeth, and the marginal teeth are of the wishbone or duplex type. Teeth of the duplex form are not shaped from two distinct elements but grow from a flat plate, by thickening at the edges of the teeth and elevation of the rear edge from the membrane.
Female turrids lay their eggs in lens-shaped capsules.
History of the taxonomy
The turrids were perceived as one of the most difficult groups to study because of a large number of supra-specific described taxa, which are complicated by their species diversity. Although some species of turrids are relatively common, many are rare, some being known only from single specimens; this is another factor that makes studying the group difficult.
2011 taxonomy
The previous (2005) classification system for the group was thoroughly changed by the publication in 2011 of the article Bouchet P., Kantor Yu.I., Sysoev A. & Puillandre N. (2011) A new operational classification of the Conoidea. Journal of Molluscan Studies 77: 273–308. The authors presented a new classification of the Conoidea at the genus level, based on anatomical characters as well as on the molecular phylogeny presented by Puillandre N. et al. (2008). The polyphyletic family Turridae was resolved into 13 monophyletic families (containing 358 currently recognized genera and subgenera):
Conorbidae
Borsoniidae
Clathurellidae
Mitromorphidae
Mangeliidae
Raphitomidae
Cochlespiridae
Drilliidae
Pseudomelatomidae (= Crassispiridae)
Clavatulidae
Horaiclavidae
Turridae s.s.
Strictispiridae
2005 taxonomy
According to the taxonomy of the Gastropoda by Bouchet & Rocroi, 2005, which attempted to set out a stable taxonomy, this group consisted of the following five subfamilies:
Turrinae H. Adams & A. Adams, 1853 (1838) - synonyms: Pleurotominae Gray, 1838; Lophiotominae Morrison, 1965 (n.a.)
Cochlespirinae Powell, 1942
Crassispirinae McLean, 1971 - synonym: Belinae A. Bellardi, 1875
Zemaciinae Sysoev, 2003
Zonulispirinae McLean, 1971
Genera
Genera in the family Turridae used to include:
Abyssocomitas Sysoev & Kantor, 1986
Acamptogenotia Rovereto, 1899
Aforia Dall, 1889
Anacithara Hedley, 1922
Ancistrosyrinx Dall, 1881
Anticomitas Powell, 1942
Antiguraleus Powell, 1942
Antimelatoma Powell, 1942
Antiplanes Dall, 1902
Aoteadrillia Powell, 1942
Apiotoma Cossmann, 1889
Asperdaphne Hedley, 1922
Austrodrillia Hedley, 1918
Bathybela Kobelt, 1905
Belalora Powell, 1951
Benthoclionella Kilburn, 1974
Buchema Corea, 1934
Burchia Bartsch, 1944
Calcatodrillia Kilburn, 1988
Carinapex Dall, 1924
Carinodrillia Dall, 1919
Carinoturris Bartsch, 1944
Ceritoturris Dall, 1924
Cheungbeia Taylor & Wells, 1994
Clavatula Lamarck, 1801
Clavosurcula Schepman, 1913
Clavus Montfort, 1810
Clionella Gray, 1847
Cochlespira Conrad, 1865
Compsodrillia Woodring, 1928
Conorbela Powell, 1951
Conticosta Laseron, 1954
Crassiclava McLean, 1971
Crassispira Swainson, 1840
Cretaspira Kuroda & Oyama, 1971
Cryptogemma Dall, 1918
Cymakra Gardner, 1937
Danilacarina Bozzetti, 1997
Daphnella Hinds, 1844
Darrylia Garcia, 2008
Decollidrillia Habe & Ito, 1965
Doxospira McLean, 1971
Eosurcula
Epideira Hedley, 1918
Epidirona Iredale, 1931
Fenimorea Bartsch, 1934
Fusiturricula Woodring, 1928
Fusiturris Thiele, 1929
Gemmula Weinkauff, 1875
Graciliclava Shuto, 1983
Haedropleura Monterosato in Bucquoy, Dautzenberg & Dollfus, 1883
Hauturua Powell, 1942
Hemilienardia Boettger, 1895
Heterocithara Hedley, 1922
Hindsiclava Hertlein and Strong, 1955
Horaiclavus Oyama, 1954
Inodrillia Bartsch, 1943
Inquisitor Hedley, 1918
Iotyrris Medinskaya & Sysoev, 2001
Iredalea Oliver, 1915
Irenosyrinx Dall, 1908
Iwaoa Kuroda, 1953
Knefastia Dall, 1919
Kurilohadalia Sysoev & Kantor, 1986
Kurodadrillia Azuma, 1975
Kuroshioturris Shuto, 1961
Leucosyrinx Dall, 1889
Lienardia Jousseaume 1884
Lioglyphostoma Woodring, 1928
Lophiotoma Casey, 1904
Lophioturris Powell, 1964
Liracraea Odhner, 1924
Lora Gistl, 1848
Lucerapex Iredale, 1936
Lusitanops F. Nordsieck, 1968
Maesiella McLean, 1971
Makiyamaia Kuroda, 1961
Marshallena Finlay, 1926
Mauidrillia Powell, 1942
Megasurcula Casey, 1904
Microdrillia Casey, 1903
Micropleurotoma Thiele, 1929
Miraclathurella Woodring, 1928
Monilispira Bartsch & Rehder, 1939
Naskia Sysoev & Ivanov, 1985
Neodrillia Bartsch, 1943
Neoguraleus Powell, 1939
Neopleurotomoides Shuto, 1925
Nihonia McNeil, 1961
Nodotoma Bartsch, 1941
Nquma Kilburn, 1988
Oenopota Moerch, 1852
Paradrillia Makiyama, 1940
Perrona Schumacher, 1817
Philbertia Monterosato, 1884
Phymorhynchus Dall, 1908
Pilsbryspira Bartsch, 1950
Pinguigemmula McNeil, 1961
Plicisyrinx Sysoev & Kantor, 1986
Polystira Woodring, 1928
Pseudexomilus Powell, 1944
Pseudotaranis McLean, 1995
Psittacodrillia Kilburn, 1988
Ptychobela Thiele, 1925
Ptychosyrinx Thiele, 1925
Pusionella Gray, 1847
Pyrgospira McLean, 1971
Rectiplanes Bartsch, 1944
Rhodopetoma Bartsch, 1944
Riuguhdrillia Oyama, 1951
Scaevatula Gofas, 1990
Shutonia van der Bijl, 1993
Sinistrella Meyer, 1887
Spirotropis Sars, 1878
Splendrillia Dell, 1956
Steiraxis Dall, 1896
Stenodrillia Korobkov, 1955
Striatoguraleus Kilburn, 1994
Surcula H. Adams & A. Adams, 1853
Teretia Norman, 1888
Tomella Swainson, 1840
Toxiclionella Powell, 1966
Turricula Schumacher, 1817
Turridrupa Hedley, 1922
Turris Röding, 1798 - type genus
Unedogemmula MacNeil, 1961
Veprecula Melvill, 1917
Vexitomina Powell, 1942
Viridoturris Powell, 1964
Viridrillia Bartsch, 1943
Vitricythara Fargo, 1953
Zemacies Finlay, 1926
Zonulispira Bartsch, 1950
| Biology and health sciences | Gastropods | Animals |
8867758 | https://en.wikipedia.org/wiki/Geology%20of%20Mars | Geology of Mars | The geology of Mars is the scientific study of the surface, crust, and interior of the planet Mars. It emphasizes the composition, structure, history, and physical processes that shape the planet. It is analogous to the field of terrestrial geology. In planetary science, the term geology is used in its broadest sense to mean the study of the solid parts of planets and moons. The term incorporates aspects of geophysics, geochemistry, mineralogy, geodesy, and cartography. A neologism, areology, from the Greek word Arēs (Mars), sometimes appears as a synonym for Mars's geology in the popular media and works of science fiction (e.g. Kim Stanley Robinson's Mars trilogy). The term areology is also used by the Areological Society.
Geological map of Mars (2014)
Global Martian topography and large-scale features
Composition of Mars
Mars is a terrestrial planet, which has undergone the process of planetary differentiation.
The InSight lander mission is designed to study the deep interior of Mars. The mission landed on 26 November 2018 and deployed a sensitive seismometer to enable 3D structure mapping of the deep interior. On 25 October 2023, scientists, helped by information from InSight, reported that the planet Mars has a radioactive magma ocean under its crust.
Global physiography
Mars has a number of distinct, large-scale surface features that indicate the types of geological processes that have operated on the planet over time. This section introduces several of the larger physiographic regions of Mars. Together, these regions illustrate how geologic processes involving volcanism, tectonism, water, ice, and impacts have shaped the planet on a global scale.
Hemispheric dichotomy
The northern and southern hemispheres of Mars are strikingly different from each other in topography and physiography. This dichotomy is a fundamental global geologic feature of the planet. The northern part is an enormous topographic depression. About one-third of the surface (mostly in the northern hemisphere) lies 3–6 km lower in elevation than the southern two-thirds. This is a first-order relief feature on par with the elevation difference between Earth's continents and ocean basins. The dichotomy is also expressed in two other ways: as a difference in impact crater density and crustal thickness between the two hemispheres. The hemisphere south of the dichotomy boundary (often called the southern highlands or uplands) is very heavily cratered and ancient, characterized by rugged surfaces that date back to the period of heavy bombardment. In contrast, the lowlands north of the dichotomy boundary have few large craters, are very smooth and flat, and have other features indicating that extensive resurfacing has occurred since the southern highlands formed. The third distinction between the two hemispheres is in crustal thickness. Topographic and geophysical gravity data indicate that the crust in the southern highlands has a maximum thickness of about 58 km, whereas the crust in the northern lowlands "peaks" at around 32 km in thickness. The location of the dichotomy boundary varies in latitude across Mars and depends on which of the three physical expressions of the dichotomy is being considered.
The origin and age of the hemispheric dichotomy are still debated. Hypotheses of origin generally fall into two categories: one, the dichotomy was produced by a mega-impact event or several large impacts early in the planet's history (exogenic theories) or two, the dichotomy was produced by crustal thinning in the northern hemisphere by mantle convection, overturning, or other chemical and thermal processes in the planet's interior (endogenic theories). One endogenic model proposes an early episode of plate tectonics producing a thinner crust in the north, similar to what is occurring at spreading plate boundaries on Earth. Whatever its origin, the Martian dichotomy appears to be extremely old. A new theory based on the Southern Polar Giant Impact and validated by the discovery of twelve hemispherical alignments shows that exogenic theories appear to be stronger than endogenic theories and that Mars never had plate tectonics that could modify the dichotomy. Laser altimeters and radar-sounding data from orbiting spacecraft have identified a large number of basin-sized structures previously hidden in visual images. Called quasi-circular depressions (QCDs), these features likely represent derelict impact craters from the period of heavy bombardment that are now covered by a veneer of younger deposits. Crater counting studies of QCDs suggest that the underlying surface in the northern hemisphere is at least as old as the oldest exposed crust in the southern highlands. The ancient age of the dichotomy places a significant constraint on theories of its origin.
Tharsis and Elysium volcanic provinces
Straddling the dichotomy boundary in Mars's western hemisphere is a massive volcano-tectonic province known as the Tharsis region or the Tharsis bulge. This immense, elevated structure is thousands of kilometers in diameter and covers up to 25% of the planet's surface. Averaging 7–10 km above datum (Martian "sea" level), Tharsis contains the highest elevations on the planet and the largest known volcanoes in the Solar System. Three enormous volcanoes, Ascraeus Mons, Pavonis Mons, and Arsia Mons (collectively known as the Tharsis Montes), sit aligned NE-SW along the crest of the bulge. The vast Alba Mons (formerly Alba Patera) occupies the northern part of the region. The huge shield volcano Olympus Mons lies off the main bulge, at the western edge of the province. The extreme massiveness of Tharsis has placed tremendous stress on the planet's lithosphere. As a result, immense extensional fractures (grabens and rift valleys) radiate outward from Tharsis, extending halfway around the planet.
A smaller volcanic center lies several thousand kilometers west of Tharsis in Elysium. The Elysium volcanic complex is about 2,000 kilometers in diameter and consists of three main volcanoes, Elysium Mons, Hecates Tholus, and Albor Tholus. The Elysium group of volcanoes is thought to be somewhat different from the Tharsis Montes, in that development of the former involved both lavas and pyroclastics.
Large impact basins
Several enormous, circular impact basins are present on Mars. The largest one that is readily visible is the Hellas basin located in the southern hemisphere. It is the second largest confirmed impact structure on the planet, centered at about 64°E longitude and 40°S latitude. The central part of the basin (Hellas Planitia) is 1,800 km in diameter and surrounded by a broad, heavily eroded annular rim structure characterized by closely spaced rugged irregular mountains (massifs), which probably represent uplifted, jostled blocks of old pre-basin crust. (See Anseris Mons, for example.) Ancient, low-relief volcanic constructs (highland paterae) are located on the northeastern and southwestern parts of the rim. The basin floor contains thick, structurally complex sedimentary deposits that have a long geologic history of deposition, erosion, and internal deformation. The lowest elevations on the planet are located within the Hellas basin, with some areas of the basin floor lying over 8 km below datum.
The two other large impact structures on the planet are the Argyre and Isidis basins. Like Hellas, Argyre (800 km in diameter) is located in the southern highlands and is surrounded by a broad ring of mountains. The mountains in the southern portion of the rim, Charitum Montes, may have been eroded by valley glaciers and ice sheets at some point in Mars's history. The Isidis basin (roughly 1,000 km in diameter) lies on the dichotomy boundary at about 87°E longitude. The northeastern part of the basin rim has been eroded and is now buried by northern plains deposits, giving the basin a semicircular outline. The northwestern rim of the basin is characterized by arcuate grabens (Nili Fossae) that are circumferential to the basin. One additional large basin, Utopia, is completely buried by northern plains deposits; its outline is clearly discernible only in altimetry data. All of the large basins on Mars are extremely old, dating to the late heavy bombardment. They are thought to be comparable in age to the Imbrium and Orientale basins on the Moon.
Equatorial canyon system
Near the equator in the western hemisphere lies an immense system of deep, interconnected canyons and troughs collectively known as the Valles Marineris. The canyon system extends eastward from Tharsis for over 4,000 km, nearly a quarter of the planet's circumference. If placed on Earth, Valles Marineris would span the width of North America. In places the canyons are up to 300 km wide and 10 km deep. Although often compared to Earth's Grand Canyon, the Valles Marineris has a very different origin from its far smaller counterpart: whereas the Grand Canyon is largely a product of water erosion, the Martian equatorial canyons are of tectonic origin, i.e. they were formed mostly by faulting, and may be more akin to the East African Rift valleys. The canyons represent the surface expression of powerful extensional strain in the Martian crust, probably due to loading from the Tharsis bulge.
Chaotic terrain and outflow channels
The terrain at the eastern end of the Valles Marineris grades into dense jumbles of low rounded hills that seem to have formed by the collapse of upland surfaces to form broad, rubble-filled hollows. Called chaotic terrain, these areas mark the heads of huge outflow channels that emerge full size from the chaotic terrain and empty (debouch) northward into Chryse Planitia. The presence of streamlined islands and other geomorphic features indicates that the channels were most likely formed by catastrophic releases of water from aquifers or the melting of subsurface ice. However, these features could also have been formed by abundant volcanic lava flows coming from Tharsis. The channels, which include Ares, Shalbatana, Simud, and Tiu Valles, are enormous by terrestrial standards, and the flows that formed them were correspondingly immense. For example, the peak discharge required to carve the 28-km-wide Ares Vallis is estimated to have been 14 million cubic metres (500 million cu ft) per second, over ten thousand times the average discharge of the Mississippi River.
Ice caps
The polar ice caps are well-known telescopic features of Mars, first identified by Christiaan Huygens in 1672. Since the 1960s it has been known that the seasonal caps (those seen in the telescope to grow and wane seasonally) are composed of carbon dioxide (CO2) ice that condenses out of the atmosphere as temperatures fall to 148 K, the frost point of CO2, during the polar wintertime. In the north, the CO2 ice completely dissipates (sublimes) in summer, leaving behind a residual cap of water (H2O) ice. At the south pole, a small residual cap of CO2 ice remains in summer.
Both residual ice caps overlie thick layered deposits of interbedded ice and dust. In the north, the layered deposits form a 3 km-high, 1,000 km-diameter plateau called Planum Boreum. A similar kilometers-thick plateau, Planum Australe, lies in the south. Both plana (the Latin plural of planum) are sometimes treated as synonymous with the polar ice caps, but the permanent ice (seen as the high albedo, white surfaces in images) forms only a relatively thin mantle on top of the layered deposits. The layered deposits probably represent alternating cycles of dust and ice deposition caused by climate changes related to variations in the planet's orbital parameters over time (see also Milankovitch cycles). The polar layered deposits are some of the youngest geologic units on Mars.
Geological history
Albedo features
No topography is visible on Mars from Earth. The bright areas and dark markings seen through a telescope are albedo features. The bright, red-ochre areas are locations where fine dust covers the surface. Bright areas (excluding the polar caps and clouds) include Hellas, Tharsis, and Arabia Terra. The dark gray markings represent areas that the wind has swept clean of dust, leaving behind the lower layer of dark, rocky material. Dark markings are most distinct in a broad belt from 0° to 40° S latitude. However, the most prominent dark marking, Syrtis Major Planum, is in the northern hemisphere. The classical albedo feature, Mare Acidalium (Acidalia Planitia), is another prominent dark area in the northern hemisphere. A third type of area, intermediate in color and albedo, is also present and thought to represent regions containing a mixture of the material from the bright and dark areas.
Impact craters
Impact craters were first identified on Mars by the Mariner 4 spacecraft in 1965. Early observations showed that Martian craters were generally shallower and smoother than lunar craters, indicating that Mars has a more active history of erosion and deposition than the Moon.
In other aspects, Martian craters resemble lunar craters. Both are products of hypervelocity impacts and show a progression of morphology types with increasing size. Martian craters below about 7 km in diameter are called simple craters; they are bowl-shaped with sharp raised rims and have depth/diameter ratios of about 1/5. Martian craters change from simple to more complex types at diameters of roughly 5 to 8 km. Complex craters have central peaks (or peak complexes), relatively flat floors, and terracing or slumping along the inner walls. Complex craters are shallower than simple craters in proportion to their widths, with depth/diameter ratios ranging from 1/5 at the simple-to-complex transition diameter (~7 km) to about 1/30 for a 100-km diameter crater. Another transition occurs at crater diameters of around 130 km as central peaks turn into concentric rings of hills to form multi-ring basins.
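These depth/diameter ratios can be combined into a rough piecewise estimate of crater depth. The sketch below is illustrative only: the log-linear interpolation between the quoted ratios (1/5 at the ~7 km transition, ~1/30 at 100 km) is an assumption made for demonstration, not a published scaling law.

```python
import math

def crater_depth_km(diameter_km: float) -> float:
    """Rough depth of a Martian crater from the depth/diameter ratios
    quoted above. Illustrative assumption: the ratio falls log-linearly
    from 1/5 at the 7 km simple-to-complex transition to 1/30 at
    100 km (not a published scaling law)."""
    if diameter_km <= 7.0:
        return diameter_km / 5.0  # simple crater: depth ~ D/5
    t = (math.log(diameter_km) - math.log(7.0)) / (math.log(100.0) - math.log(7.0))
    t = min(t, 1.0)  # clamp for basins wider than 100 km
    ratio = (1 / 5) + t * ((1 / 30) - (1 / 5))
    return diameter_km * ratio

# A 7 km crater comes out ~1.4 km deep, while a 100 km crater is only
# ~3.3 km deep despite being ~14 times wider.
for d in (5, 7, 30, 100):
    print(f"{d:>3} km diameter -> ~{crater_depth_km(d):.1f} km deep")
```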
Mars has the greatest diversity of impact crater types of any planet in the Solar System. This is partly because the presence of both rocky and volatile-rich layers in the subsurface produces a range of morphologies even among craters of the same size class. Mars also has an atmosphere that plays a role in ejecta emplacement and subsequent erosion. Moreover, Mars has a rate of volcanic and tectonic activity low enough that ancient, eroded craters are still preserved, yet high enough to have resurfaced large areas, producing a diverse range of crater populations of widely differing ages. Over 42,000 impact craters greater than 5 km in diameter have been catalogued on Mars, and smaller craters are innumerable. The density of craters is highest in the southern hemisphere, south of the dichotomy boundary, where most of the large craters and basins are located.
Crater morphology provides information about the physical structure and composition of the surface and subsurface at the time of impact. For example, central peaks in Martian craters are larger than those in comparable craters on Mercury or the Moon. In addition, the central peaks of many large craters on Mars have pit craters at their summits. Central pit craters are rare on the Moon but are very common on Mars and the icy satellites of the outer Solar System. Large central peaks and the abundance of pit craters probably indicate the presence of near-surface ice at the time of impact. Poleward of 30° latitude, the forms of older impact craters have been rounded ("softened") by soil creep accelerated by ground ice.
The most notable difference between Martian craters and other craters in the Solar System is the presence of lobate (fluidized) ejecta blankets. Many craters at equatorial and mid-latitudes on Mars have this form of ejecta morphology, which is thought to arise when the impacting object melts ice in the subsurface. Liquid water in the ejected material forms a muddy slurry that flows along the surface, producing the characteristic lobe shapes. The crater Yuty is a good example of a rampart crater, which is so called because of the rampart-like edge to its ejecta blanket.
Martian craters are commonly classified by their ejecta. Craters with one ejecta layer are called single-layer ejecta (SLE) craters. Craters with two superposed ejecta blankets are called double-layer ejecta (DLE) craters, and craters with more than two ejecta layers are called multiple-layered ejecta (MLE) craters. These morphological differences are thought to reflect compositional differences (i.e. interlayered ice, rock, or water) in the subsurface at the time of impact.
Martian craters show a large diversity of preservational states, from extremely fresh to old and eroded. Degraded and infilled impact craters record variations in volcanic, fluvial, and eolian activity over geologic time. Pedestal craters are craters whose ejecta sits above the surrounding terrain to form raised platforms. They occur because the crater's ejecta forms a resistant layer, so that the area nearest the crater erodes more slowly than the rest of the region. Some pedestals stand hundreds of meters above the surrounding area, meaning that hundreds of meters of material have been eroded away. Pedestal craters were first observed during the Mariner 9 mission in 1972.
Volcanism
Volcanic structures and landforms cover large parts of the Martian surface. The most conspicuous volcanoes on Mars are located in Tharsis and Elysium. Geologists think one reason volcanoes on Mars were able to grow so large is that Mars has far fewer tectonic plate boundaries than Earth: lava from a stationary hot spot could accumulate at one location on the surface for many hundreds of millions of years.
Scientists have never recorded an active volcanic eruption on the surface of Mars, and searches for thermal signatures and surface changes within the last decade have not yielded evidence for active volcanism.
On October 17, 2012, the Curiosity rover performed the first X-ray diffraction analysis of Martian soil at the "Rocknest" site. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. In July 2015, the same rover identified tridymite in a rock sample from Gale Crater, leading scientists to conclude that silicic volcanism may have played a much more prevalent role in the planet's volcanic history than previously thought.
Sedimentology
Flowing water appears to have been common on the surface of Mars at various points in its history, and especially on ancient Mars. Many of these flows carved the surface, forming valley networks and producing sediment. This sediment has been redeposited in a wide variety of wet environments, including alluvial fans, meandering channels, deltas, lakes, and perhaps even oceans. Deposition and transport were driven by gravity: gravity-related differences in water flux and flow speed, inferred from grain-size distributions, show that Martian landscapes were created under a range of environmental conditions. Nevertheless, there are other ways of estimating the amount of water on ancient Mars (see Water on Mars). Groundwater has been implicated in the cementation of aeolian sediments and in the formation and transport of a wide variety of sedimentary minerals, including clays, sulphates and hematite.
When the surface has been dry, wind has been a major geomorphic agent. Wind-driven sand bodies such as megaripples and dunes are extremely common on the modern Martian surface, and Opportunity documented abundant aeolian sandstones on its traverse. Ventifacts, such as the rock Jake Matijevic, are another aeolian landform on the Martian surface.
A wide variety of other sedimentological facies are also present locally on Mars, including glacial deposits, hot springs, dry mass-movement deposits (especially landslides), and cryogenic and periglacial material, amongst many others. Evidence of ancient rivers, a lake, and dune fields has been observed in the preserved strata by rovers at Meridiani Planum and Gale crater.
Common surface features
Groundwater on Mars
One group of researchers proposed that some of the layers on Mars were caused by groundwater rising to the surface in many places, especially inside craters. According to the theory, groundwater with dissolved minerals came to the surface, in and later around craters, and helped to form layers by adding minerals (especially sulfate) and cementing sediments. This hypothesis is supported by a groundwater model and by sulfates discovered over a wide area. Initially, by examining surface materials with the Opportunity rover, scientists discovered that groundwater had repeatedly risen and deposited sulfates. Later studies with instruments aboard the Mars Reconnaissance Orbiter showed that the same kinds of materials exist over a large area that includes Arabia.
Interesting geomorphological features
Avalanches
On February 19, 2008, images obtained by the HiRISE camera on the Mars Reconnaissance Orbiter showed a spectacular avalanche, in which debris thought to be fine-grained ice, dust, and large blocks fell from a high cliff. Evidence of the avalanche included dust clouds rising from the cliff afterwards. Such geological events are theorized to be the cause of geologic patterns known as slope streaks.
Possible caves
NASA scientists studying pictures from the Odyssey spacecraft have spotted what might be seven caves on the flanks of the Arsia Mons volcano on Mars. The pit entrances measure from 100 to 252 m wide, and the pits are thought to be at least 73 to 96 m deep. The pits have been informally named (A) Dena, (B) Chloe, (C) Wendy, (D) Annie, (E) Abby (left) and Nikki, and (F) Jeanne. Dena's floor was observed and found to be 130 m deep. Further investigation suggested that these were not necessarily lava tube "skylights", and review of the images has resulted in the discovery of yet more deep pits. A database of over 1,000 Martian cave candidates at Tharsis Montes, the Mars Global Cave Candidate Catalog (MGC3), has been developed by the USGS Astrogeology Science Center, and as of 2021 scientists were applying machine-learning algorithms to extend it across the entire surface of Mars.
It has been suggested that human explorers on Mars could use lava tubes as shelters. The caves may be the only natural structures offering protection from the micrometeoroids, UV radiation, solar flares, and high energy particles that bombard the planet's surface. These features may enhance preservation of biosignatures over long periods of time and make caves an attractive astrobiology target in the search for evidence of life beyond Earth.
Inverted relief
Some areas of Mars show inverted relief, in which features that were once depressions, such as stream channels, now stand above the surrounding surface. It is believed that materials such as large rocks were deposited in low-lying areas; later, wind erosion removed much of the overlying surface layers but left behind the more resistant deposits. Other ways of producing inverted relief include lava flowing down a stream bed or materials being cemented by minerals dissolved in water; on Earth, materials cemented by silica are highly resistant to all kinds of erosion. Examples of inverted channels on Earth are found in the Cedar Mountain Formation near Green River, Utah. Inverted relief in the form of stream channels is further evidence of water flowing on the Martian surface in the past, and suggests that the climate was much wetter when the channels formed.
In an article published in 2010, a large group of scientists endorsed the idea of searching for life in Miyamoto Crater because of inverted stream channels and minerals that indicated the past presence of water.
| Physical sciences | Solar System | Astronomy |
16280838 | https://en.wikipedia.org/wiki/Demand-responsive%20transport | Demand-responsive transport | Demand-responsive transport (DRT), also known as demand-responsive transit, demand-responsive service, Dial-a-Ride transit (sometimes DART), flexible transport services, microtransit, non-emergency medical transport (NEMT), carpool or on-demand bus service, is a form of shared private or quasi-public transport for groups of travellers in which vehicles alter their routes on each journey in response to particular transport demand, rather than following a fixed route or timetable. These vehicles typically pick up and drop off passengers at locations chosen according to passengers' needs, and can include taxis, buses or other vehicles. Passengers can typically summon the service with a mobile phone app or by telephone; telephone booking is particularly relevant to older users who may not be conversant with the technology.
One of the most widespread applications of demand-responsive transport (DRT) is the provision of public transport in areas of low passenger demand, such as rural and peri-urban areas, where a regular bus service is not considered financially viable. Services may also be provided for particular types of passengers; one example is paratransit programs for people with a disability. The provision of public transport in this manner emphasises its function as a social service rather than as part of a viable movement network.
Definition
DRT can be used to refer to many different types of transport. When taxicabs were first introduced to many cities, they were hailed as an innovative form of DRT. They are still referred to as DRT in some jurisdictions around the world as their very nature is to take people from point-to-point based on their needs.
More recently, DRT generally refers to a type of public transport. Such services are distinct from fixed-route services in that they do not always operate to a specific timetable or route. While specific operations vary widely, generally a particular area is designated for DRT service. Once a certain number of people have requested trips, the most efficient route is calculated from the origins and destinations of the passengers, as in the sketch below.
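A minimal sketch of that routing step is shown below, assuming a single vehicle and straight-line distances. The nearest-neighbour heuristic and the coordinate layout are illustrative assumptions; production DRT software uses far more sophisticated vehicle-routing optimisation with live traffic and time-window constraints.

```python
import math

def nearest_neighbour_route(depot, stops):
    """Order stops by repeatedly driving to the closest remaining one.
    A toy stand-in for the 'most efficient route' computation described
    above, not any real dispatch system's algorithm."""
    route, here, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(here, s))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

# Hypothetical pickup/drop-off points as (x, y) km coordinates.
print(nearest_neighbour_route((0, 0), [(2, 3), (8, 1), (4, 7), (1, 1)]))
# -> [(1, 1), (2, 3), (4, 7), (8, 1)]
```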
Share taxis are another form of DRT. They are usually operated on an ad hoc basis but also do not have fixed routes or times and change their route and frequency depending on demand.
Some DRT systems operate as a service that can deviate from a fixed route. These operate along a fixed alignment or path at specific times but may deviate to collect or drop off passengers who have requested the deviation.
Comparison of demand-responsiveness by type
Fully flexible route, fully flexible schedule, no booking – personal vehicle, foot
Shared vehicle
Fully flexible route, fully flexible schedule, booking – minicab
Fully flexible route, fully flexible schedule, no booking – taxi
Shared journey
Highly flexible route, highly flexible schedule, mobile booking – microtransit
Some degree of flexible route or schedule, no booking – share taxi/taxibus
Some degree of flexible route or schedule, booking – paratransit
Fixed route and fixed schedule, no booking – bus
Fixed route and fixed schedule, booking – coach, aeroplane
Operation
DRT services are restricted to a defined operating zone within which journeys must start and finish. Journeys may be completely free-form, or may follow skeleton routes and schedules that are varied as required, with users given a specified pick-up point and a time window for collection. Some DRT systems have defined termini at one or both ends of a route, such as an urban centre, airport or transport interchange, for onward connections.
DRT systems require passengers to request a journey in advance. They may do this by booking with a central dispatcher, who determines the journey options available given the user's location and destination. Increasingly, the booking is made via an app, which provides the interface to software that creates a schedule in real time, adjusting the schedule to accept (or reject) bookings as they come in. This provides an instant decision for the potential user, but at the cost of efficiency: each travel request is considered in isolation, potentially resulting in higher levels of idle time (when the schedule has gaps too short to allow an additional journey to be added) and "dead mileage" (driving empty between one drop-off and the next pickup) than might be expected from a schedule built by an experienced human operator.
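The accept/reject loop described above can be sketched as a greedy single-vehicle insertion check. Everything here is a hedged illustration: the day window, the travel-time function and the job representation are invented for the example and do not describe any particular vendor's software.

```python
DAY_END = 24 * 60  # minutes in one service day (assumed)

def try_book(schedule, job, travel_time):
    """Greedy real-time insertion: accept `job` only if it fits a gap
    in the existing `schedule` without moving confirmed bookings.
    schedule: time-sorted list of (start, end, location) jobs;
    job: (earliest_start, duration, location);
    travel_time(a, b): dead-mileage minutes between locations."""
    earliest, duration, loc = job
    # Gap boundaries: where the vehicle frees up, and where it must be next.
    befores = [(0, None)] + [(end, l) for _, end, l in schedule]
    afters = [(start, l) for start, _, l in schedule] + [(DAY_END, None)]
    for (free_from, prev_loc), (next_start, next_loc) in zip(befores, afters):
        dead_in = travel_time(prev_loc, loc) if prev_loc is not None else 0
        dead_out = travel_time(loc, next_loc) if next_loc is not None else 0
        start = max(earliest, free_from + dead_in)
        if start + duration + dead_out <= next_start:
            return True, sorted(schedule + [(start, start + duration, loc)])
    return False, schedule  # instant rejection; the request is never revisited

# Toy 1-D travel-time model: locations are points on a line.
tt = lambda a, b: abs(a - b)
ok, sched = try_book([], (9 * 60, 20, 5), tt)  # 09:00, 20-minute ride at km 5
```

Because each request is committed (or refused) immediately, gaps too short for another job become idle time, and back-to-back jobs at distant locations add dead mileage that a batch-built schedule might have avoided.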
DRT systems take advantage of fleet telematics technology in the form of vehicle location systems, scheduling and dispatching software, and hand-held/in-vehicle computing.
Vehicles used for DRT services are typically small minibuses sufficient for low ridership, which allow the service to provide as near a door-to-door service as practical by using narrower residential streets. In some cases taxicabs are hired by the DRT provider to serve their routes on request.
DRT schemes may be fully or partially funded by the local transit authority, with operators selected by public tendering or other methods. Other schemes may be partially or fully self-funded as community centred not for profit social enterprises (such as a community interest company in the UK). They may also be provided by private companies for commercial reasons; some conventional bus operating companies have set up DRT-style airport bus services, which compete with larger private hire airport shuttle companies.
Health and environmental effects
DRT can potentially reduce the number of vehicles on the road, and hence pollution and congestion, if many people are persuaded to use it instead of private cars or taxis.
For a model of a hypothetical large-scale demand-responsive public transport system for the Helsinki metropolitan area, simulation results published in 2005 demonstrated that "in an urban area with one million inhabitants, trip aggregation could reduce the health, environmental, and other detrimental impacts of car traffic typically by 50–70%, and if implemented could attract about half of the car passengers, and within a broad operational range would require no public subsidies".
Licensing
DRT schemes may require new or amended legislation, or special dispensation, to operate, as they do not fit the traditional licensing model of authorised bus transport providers or licensed taxicab operators. This status has caused controversy between bus and taxi operators when a DRT service picks up passengers without pre-booking, owing to the licensing issues involved. Issues may also arise around tax and fuel subsidies for DRT services.
Effectiveness
Ridership on DRT services is usually quite low (less than ten passengers per hour), but DRT can provide coverage effectively.
Analysis of the Yorbus DRT scheme in a rural area of the UK showed very little combining of individual journeys. During the 35% of operating hours when the vehicles were carrying passengers, there was just one passenger (or a couple travelling together) aboard for 74% of the time, and two passengers (or couples travelling together) for a further 20% of the time; in other words, the vehicles carried a single booking for roughly a quarter of all operating hours. The 15-seat minibuses could have been replaced by small taxis without capacity problems for 97% of operating hours.
List of current DRT systems by country
Since the mid-2010s, several DRT projects have started up but failed.
In the US, several DRT operators appeared and promptly failed, owing either to a lack of customers or to health and safety issues. Trials in London in 2019 found that "satisfaction was really high": users scored the service at 4.8/5 and praised its ease of use, safety, cleanliness and accessibility. But low take-up, misunderstandings about who the service was for, and safety concerns about unlit stops, together with problems caused by the COVID-19 pandemic from 2020, led the trials to fail.
Lukas Foljanty, a shared-mobility enthusiast and market expert, tracks DRT schemes around the world and believes a tipping point may have been reached in 2022: there were at least 450 schemes worldwide, and in 2021 fifty-four new projects started within a single three-month period.
David Carnero of Europe-wide DRT technology company Padam said that successful DRT requires subsidies, must be delivered at scale, and must be part of an integrated, rather than competitive, transport policy.
Australia
CoastConnect, first-mile / last-mile demand-responsive transport service in Woy Woy, New South Wales, operated by Community Transport Central Coast Limited and Liftango
Kan-go, demand-responsive transport service in Hervey Bay, Queensland and Toowoomba, Queensland
SmartLink, demand-responsive transport service in the Blue Mountains.
Skybus hotel transfer service in Melbourne, Victoria.
Telebus in Melbourne, Victoria, which provided demand-responsive bus services to some outer suburbs of the metropolitan area from the 1970s.
Flexiride in Melbourne, Victoria replaced Telebus services in 2021
Austria
Rufbus Linie 326 Leopoldschlag – Summerau – Freistadt
W3 Shuttle
Belgium
Flexbus (called Belbus before 2024) – has operated in the Flemish Region since 1991
Canada
Belleville, Ontario – BT Let's Go, operated by Belleville Transit, replaces fixed-route night bus services with an on-demand transit service. This provides stop-to-stop scheduled pick-ups and drop-offs requested by riders through a web-based application. Buses are dynamically routed to riders in real time by an autonomous algorithm.
Cobourg, Ontario – operated by Cobourg Transit and planned as a complete replacement of fixed-route bus service, requiring residents to book a stop in advance. Following a pilot, it was scheduled to be fully integrated with the town's WHEELS transit service and to replace fixed-route transit on June 14, 2021.
Edmonton, Alberta – Edmonton Transit Service offers On Demand Transit in designated areas not served by scheduled routes.
Guelph, Ontario – Works in addition to fixed route service.
Niagara Falls, Ontario – TransCab Service, operated by Niagara Falls Transit, provides service to the Montrose Junction section of the city during the daytime and early evening.
Toronto, Ontario — Wheel-Trans
Winnipeg, Manitoba – WT On-Request, operated by Winnipeg Transit, replaces regular fixed transit route service in three neighbourhoods during low-use hours and provides door-to-door transit service in one inner-city neighbourhood during daytime hours.
Czech Republic
CITYA – a company that uses a mobile application for booking and planning routes
DHD – non-public DRT operated since 2003. Its primary purpose was collecting workers from sparsely populated rural areas. DHD provides booking and administrative support, while the buses themselves are operated by several local transport companies.
Radiobus – operated across the country between 2004 and 2018. From 2011 it was part of the general public transport system, supplementing the existing network during times of low demand. It used fixed timetables, but vehicles only ran when called by passengers.
Denmark
All five major public transit authorities in Denmark provide door-to-door DRT services, in different variants and to different degrees.
Flextur is public transport using smaller vehicles that run on demand. Plustur is flexible transport for cases where the bus or train does not run all the way. Flexrute is on-demand public transport without a fixed timetable, driving from stop to stop within a defined geographical area. There are also special-needs school transport and paratransit services.
The DRT services in Denmark are maintained as a collaboration between the PTAs in a joint venture, FlexDanmark, thus providing nationwide DRT services (excluding some islands). There are three major operational areas:
Movia Trafik covers the eastern part of Denmark, including the Copenhagen metropolitan area as well as the rest of the island of Zealand, Falster and Møn.
Midttrafik, Sydtrafik and FynBus cover middle and southern Denmark, including the island of Funen.
Nordjyllands Trafikselskab covers the northern part of Denmark.
Finland
There is paratransit service (palvelulinja, palveluliikenne) in many cities and municipalities in Finland. It is mainly aimed at those who find it difficult to use other public transport, but often anyone who wants to can order a trip.
Jakobstad – Vippari
Porvoo – Kyläkyyti
Raahe - Raahe-kyyti
Riihimäki – R-kyyti
Germany
Berlin – Allygator Shuttle, Clevershuttle, BerlKönig
Braunschweig, Lower Saxony – Anruflinien-Taxi (ALT) and Anruflinien-Bus (ALB)
Cologne – AnrufLinienFahrt (ALF), an on-demand minibus service that operates in predominantly rural areas of the city.
Dresden – Anruf-Linien-Bus Verkehrsgesellschaft Meißen
Duisburg – myBUS
Elbe-Elster – Anruf-Linienbus, a DRT bus service operated by the regional public transport authority in Herzberg, Sonnewalde, Umland and Finsterwalde
Freyung, Bavaria – FreyFahrt
Hamburg – MOIA
Munich – IsarTiger
Rostock – REBUS (Regional Bus Rostock)
Districts of Tirschenreuth, Neustadt an der Waldnaab and Schwandorf – Baxi, a mix between taxis and buses taking passengers from stops to any destination within the districts.
Hong Kong
Red minibuses, which serve non-franchised routes across the territory, allow passengers on some routes to reserve seats by phone, so that operators and drivers know where passengers are and how many there are when deploying their vehicles.
Iceland
The public transport authority for the Icelandic capital of Reykjavík and the surrounding municipalities manages public bus transport and disabled transport but does not own its own vehicles. It handles about 1,300 enquiries and a thousand trips a day, using 60 vehicles plus a further 10–20 for school transport for children with special needs.
Ireland
A network of over 1,000 demand responsive transport routes are provided across rural Ireland under the TFI Local Link brand. Many of these routes are once a week services which operate a door-to-door pickup from a rural area into a nearby large town, where people can access shopping and other services, followed by a return service a few hours later with a door-to-door drop off back to the same rural area. Other routes include daily return services to/from colleges or employment centres, weekend evening services to/from a night-time activity centre, weekly services to attend Mass, feeder services to connect with scheduled bus and train services, and services on off-shore islands to connect with ferry departures and arrivals.
Services are managed by 15 regional TFI Local Link offices across the country on behalf of the National Transport Authority (NTA), and usually require prebooking by phoning the relevant office in advance. As of June 2023, there are no real-time app-based demand responsive transport services operating in Ireland, but in April 2023 the NTA informed suppliers that they intended "to procure a trial of and, if successful a roll out of, Smart Demand Responsive Transport services (SDRT), using app based products to secure services and routing algorithms to match vehicles with capacity to users".
Italy
Following some pioneering DRT schemes implemented in the 1980s, a second wave of systems were launched from the mid-1990s. There are now DRT schemes in urban and peri-urban areas as well as in rural communities. Operated by both public transport companies and private service providers, the DRT schemes are offered either as intermediate collective transport services for generic users or as schemes for specific user groups. DRT schemes operate in major cities including Rome, Milan, Genoa, Florence, and in several mid- to small-size towns including Alessandria, Aosta, Cremona, Livorno, Mantova, Parma, Empoli, Siena, and Sarzana.
AllôBus and AllôNuit, demand-responsive transport service in Aosta
DrinBus, demand-responsive transport service in Genoa
PersonalBus, demand-responsive transport service in Florence
ProntoBus, demand-responsive transport service in Livorno and Sarzana
EccoBus, demand-responsive transport service in Alessandria
StradiBus, demand-responsive transport service in Cremona
Radiobus, demand-responsive transport service in Milan
Japan
More than 200 of the 1,700 local governments in Japan have introduced DRT public transport services.
Luxembourg
Flexibus – several Flexibus services operate in different parts of the country. The system operates on the basis of passengers calling a central point from which optimal routes for the vehicles are calculated.
Kussbus – private door-to-door bus service primarily for commuter purposes.
Malaysia
Kummute
Mobi
Rapid DRT
Trek Rides, a demand-responsive 'transit' service operated by Asia Mobiliti Technologies Sdn Bhd, designed as first- and last-mile transportation to rail networks and currently operating only in Kuala Lumpur and Selangor.
Netherlands
Deeltaxi
Collectief Vraagafhankelijk Vervoer
New Zealand
MyWay in Timaru, a replacement of the usual bus service with demand-responsive transport service.
Poland
The first ever demand-responsive transport scheme in Poland – called Tele-Bus – has been operated since 2007 in Kraków by MPK, the local public transport company (see also Tramways in Kraków).
Russia
"Po puti", or "On the Way", is the first-ever demand-responsive transport scheme in Russia. Launched on October 1, 2021 and operated by Mosgortrans, it serves two zones in NAO and TAO, Moscow (both often referred to as "ТиНАО" in Russian). Zone 1 includes Filimonkovskoye, Sosenskoye, Desyonovskoye and Voskresenskoye Settlements with the Prokshino metro station. Zone 2a, introduced on November 1, 2021, includes Ryazanovskoye Settlement with the Silikatnaya railway station, Line D2. Starting from December 24, 2021, the Shcherbinka railway station, also D2, was added to zone 2a, whereas zone 1 was expanded by adding more blocks of Filimonkovskoye Settlement and southern areas of Desyonovskoye Settlement. Further enlargement is announced.
Sweden
The regional transport authority in Västra Götaland in southwestern Sweden is responsible for all public transport and for transport services for citizens with special needs; this is an example of DRT used for people with special needs (paratransit).
Switzerland
DRT services have operated in some sparsely populated areas (under 100 persons/km²) since 1995. PostBus Switzerland Ltd, the bus operator of the national post company, has run a DRT service called PubliCar, formerly also known as Casa Car.
United Kingdom
Some DRT schemes were able to operate under the UK bus-operating regulations of 1986 by having core start and finish points and a published schedule. Regulations concerning bus service registration and the application of bus-operating grants for England and Wales were amended in 2004 to allow registration of fully flexible pre-booked DRT services. Some services, such as LinkUp, only pick up passengers at 'meeting points', but can set down at the passenger's destination.
The Greenwich Association of the Disabled had earlier developed a prototype service, GAD-About, which offered pre-booked door-to-door transport for its members, inspired by similar minibus use in church and youth clubs. That model was then cloned as an easily scalable module, under the aegis of London Transport, to become the Dial-a-Ride service launched as part of the general services of Transport for London (TfL), rather than as a registered bus service.
Examples of UK schemes include:
WESTlink, a service in Bristol, Bath, South Gloucestershire and Somerset operated by the West of England Combined Authority
ArrivaClick (Kent, Watford and Speke)
Connect2Wiltshire (Wiltshire)
Fflecsi, (Wales), DRT services implemented during the COVID-19 pandemic with app provided by ViaVan, and co-ordinated by Transport for Wales.
CallConnect (Lincolnshire)
LinkUp (Tyne & Wear) (Closed 2011)
London Dial-a-Ride
Nippy Bus (Somerset)
Tees Flex (Tees Valley)
Travel Derbyshire on Demand (Derbyshire)
United States
[Image: Dial-a-Ride in New Jersey, 1974]
The large majority of the 1,500 rural transit systems in the US provide demand-response service; there are also about 400 urban DRT systems.
California
Demand-Responsive Van Service
Demand-Response Shuttle, Don Edwards San Francisco Bay National Wildlife Refuge
Demand-Responsive Transit, Redwood National and State Parks
Colorado
Call-n-Ride service, Regional Transportation District, Denver
Florida
As of 2022, at least 30 transit agencies in Florida have demand-response trips.
Flex Service, Votran, New Smyrna Beach
NeighborLink, Lynx, Central Florida
SNAP, UF Transportation and Parking Services, Gainesville
Bayway, Bay County
Georgia
Demand-Responsive transportation, Henry County
Henry Connect Microtransit, McDonough
Illinois
Call-n-Ride, Pace Bus, Chicago metropolitan area
Safe Rides, Champaign-Urbana Mass Transit District, Champaign-Urbana metropolitan area (evening and overnight service only)
Maryland
Ride on Flex, Ride On, Montgomery County
New Jersey
Access Link, New Jersey Transit (statewide)
Go Trenton!, Trenton
New York
Access-A-Ride, Metropolitan Transportation Authority, New York City
Bee-Line Paratransit, Bee-Line Bus System, Westchester County
RTS on Demand, Regional Transit Service, Rochester
Flex Service, Capital District Transportation Authority, Albany
North Carolina
Dial-A-Ride, GWTA, Goldsboro
Flex Service, Greenway Transit, Taylorsville & Burke County. Hybrid of fixed & on-demand.
Night Shuttle, Tar River Transit, Rocky Mount
Qualla Community Resident Transportation, Cherokee Transit, Jackson County. Hybrid of fixed & on-demand.
Rural General Public Service, MTS, Charlotte metropolitan area
Trailblazer Routes, BCMM, Asheville metropolitan area. Hybrid of fixed & on-demand.
Pennsylvania
Flex Connect, Pocono Pony, Monroe County. Designated stops only.
South Carolina
Tel-A-Ride, CARTA, Charleston
Tennessee
Ready!, MATA, Memphis
Groove, Memphis
Texas
GoLink, DART, Dallas area
Trinity Metro On-Demand
STAR Transit, several cities
Utah
UTA On Demand, UTA, Salt Lake County, Davis County, and Tooele County
Virginia
Care and Care Plus, GRTC, Greater Richmond Region
Washington State
Zone Service and Flex Service, Whatcom Transportation Authority, Whatcom County
Dial-a-Ride Skagit Transit, Skagit County
Island Transit GO! Island Transit, Island County
Dial-a-Ride Transit and ZIP, Community Transit, Snohomish County
Metro Access, King County Metro, King County
Interlink Clallam Connect, and Dial-a-Ride, Clallam Transit, Clallam County
Dial-a-Ride, Jefferson Transit (Washington), Jefferson County
Finley Service, Ben Franklin Transit, Tri-Cities, Washington
Columbia County Public Transportation, Columbia County
Washington, D.C.
MetroAccess, WMATA, Washington, D.C.
| Technology | Motorized road transport | null |
16289428 | https://en.wikipedia.org/wiki/Lonesome%20George | Lonesome George | Lonesome George (1910 – June 24, 2012) was a male Pinta Island tortoise (Chelonoidis niger abingdonii) and the last known individual of the subspecies. In his last years, he was known as the rarest creature in the world. George serves as an important symbol for conservation efforts in the Galápagos Islands and throughout the world.
Discovery
George was first seen on the island of Pinta on November 1, 1971, by Hungarian malacologist József Vágvölgyi. The island's vegetation had been devastated by introduced feral goats, and the indigenous C. n. abingdonii population had been reduced to a single individual. It is thought that he was named after a character played by American actor George Gobel. He was relocated for his own safety to the Charles Darwin Research Station on Santa Cruz Island, where he spent his life under the care of Fausto Llerena, for whom the tortoise breeding center is named.
It was hoped that more Pinta Island tortoises would be found, either on Pinta Island or in one of the world's zoos, similar to the discovery of the Española Island male in San Diego. No other Pinta Island tortoises were found. The Pinta Island tortoise was pronounced functionally extinct, as George was in captivity.
Mating attempts
Over the decades, all attempts at mating Lonesome George had been unsuccessful. This prompted researchers at the Darwin Station to offer a $10,000 reward for a suitable mate.
Until January 2011, George was penned with two females of the subspecies Chelonoidis niger becki (from the Wolf Volcano region of Isabela Island), in the hope that his genotype would be retained in any resulting progeny. That subspecies was then thought to be genetically closest to George's; however, any potential offspring would have been hybrids, not purebred Pinta Island tortoises.
In July 2008, George mated with one of his female companions, and thirteen eggs were collected and placed in incubators. On November 11, 2008, the Charles Darwin Foundation reported that 80% of the eggs showed weight loss characteristic of inviability. By December 2008, the remaining eggs had failed to hatch, and X-rays showed that they were inviable.
On July 23, 2009, exactly one year after announcing George had mated, the Galápagos National Park announced one of George's female companions had laid a second clutch of five eggs. The park authority expressed its hope for the second clutch of eggs, which it said were in perfect condition. The eggs were moved to an incubator, but on December 16, it was announced that the incubation period had ended and the eggs were inviable (as was a third batch of six eggs laid by the other female).
In November 1999, scientists had reported that Lonesome George was "very closely related to tortoises" from Española Island (C. n. hoodensis) and San Cristóbal Island (C. n. chathamensis). On January 20, 2011, two female C. n. hoodensis individuals were brought to the Charles Darwin Research Station, where George lived.
Death
On June 24, 2012, at 8:00 A.M. local time, Galápagos National Park director Edwin Naula announced that Lonesome George had been found dead by Fausto Llerena, who had looked after him for forty years. Naula suspected that the cause of death was cardiac arrest. A necropsy confirmed that George died from natural causes. The body of Lonesome George was frozen and shipped to the American Museum of Natural History in New York City to be preserved by taxidermists. The preservation work was carried out by the museum's taxidermist George Dante, with input from scientists.
After a short display at the museum, it was expected that Lonesome George's taxidermy would be returned to the Galápagos and displayed at the Galapagos National Park headquarters on Santa Cruz Island for future generations to see. However, a dispute broke out between an Ecuadorean ministry and the Galápagos Islands. The Ecuadorean government wanted the taxidermy to be shown in the capital, Quito, but the Galápagos local mayor said Lonesome George was a symbol of the islands and should return home.
On February 17, 2017, Lonesome George's taxidermy was flown back to the Galápagos Islands, where it is currently on display in the Fausto Llerena Breeding Center.
Biological conservation
In November 2012, in the journal Biological Conservation, researchers reported identifying 17 tortoises that are partially descended from the same species as Lonesome George, leading them to speculate that closely related purebred individuals of that species may still be alive.
In December 2015, it was reported that another subspecies discovered by Yale researchers, Chelonoidis niger donfaustoi, had a 90% DNA match to the Pinta Island tortoise, and scientists believe this could possibly be used to resurrect the subspecies, meaning George may not have been the last of his kind.
In December 2018, Quesada et al. published a paper describing the sequencing of George's genome and some of his aging-related genes. They estimated that the population of C. n. abingdonii had been declining for the past one million years, and identified lineage-specific variants affecting DNA repair genes, proteostasis, metabolism regulation and immune response as key processes in the evolution of giant tortoises, acting through effects on longevity and resistance to infection.
In February 2020, the Galápagos National Park, together with the Galápagos Conservancy, reported that a female tortoise directly related to Lonesome George's subspecies had been identified. She was among thirty tortoises found to be related to two species considered extinct.
| Biology and health sciences | Individual animals | Animals |
7376082 | https://en.wikipedia.org/wiki/Syagrus%20romanzoffiana | Syagrus romanzoffiana | Syagrus romanzoffiana, the queen palm, cocos palm or Jerivá, is a palm native to South America, introduced throughout the world as a popular ornamental garden tree. S. romanzoffiana is a medium-sized palm, quickly reaching maturity at a height of up to about 15 m, with pinnate leaves bearing as many as 494 pinnae (leaflets), although more typically around 300.
Etymology
The species is named after Nikolay Rumyantsev (1754–1826), Russia's Foreign Minister and Imperial Chancellor and a notable patron of the Russian voyages of exploration; he sponsored the first Russian circumnavigation of the globe.
It was previously scientifically known as Cocos plumosa, a name under which it became popular in the horticultural trade in the early 20th century. In some areas of the world the plant is still popularly known as the cocos palm.
Taxonomy
This palm was first scientifically described and validly published as Cocos romanzoffiana in 1822 in Paris in a folio of illustrations made by the artist Louis Choris, with a description by the French-German poet and botanist Adelbert von Chamisso. Both men had participated in the first Russian scientific expedition around the world under command of Otto von Kotzebue, and funded by Nikolay Rumyantsev, during which they collected this plant in the hinterland of Santa Catarina, Brazil in late 1815.
Meanwhile, in England, circa 1825, the Loddiges nursery had imported seed of a palm from Brazil which it dubbed Cocos plumosa in its catalogue, a nomen nudum. The horticulturist John Claudius Loudon in 1830 listed this plant among three species of the genus Cocos then grown in Britain, and mentioned its possible identification as Karl von Martius' C. comosa. One of Loddiges' seedlings found its way to the new palm stove built at Kew Gardens in the 1840s, where it grew to a height of 50–60 ft and where botanists determined it to be another of von Martius' species, C. coronata. In 1859 this palm flowered and produced fruit for the first time, which made it clear that its previous identification was incorrect, and thus the director of the garden, Joseph Dalton Hooker, 'reluctantly' published a valid description for Loddiges' name C. plumosa in 1860. C. plumosa became a popular ornamental plant around the world, and plants continued to be sold under this name as of 2000.
From 1887 onwards Odoardo Beccari published a review of the genus Cocos. Under subgenus Arecastrum he listed the taxa C. romanzoffiana of Santa Catarina, C. plumosa known only from cultivation from seedlings from the plant in Kew, C. australis of Argentina to Paraguay, C. datil of eastern Argentina and Uruguay, C. acrocomioides of Mato Grosso do Sul, C. acaulis of Piauí, Goiás and recently collected from the mountains of Paraguay bordering Brazil, and C. geriba (syn. C. martiana) known as a variable species cultivated in gardens throughout Brazil (Rio Grande do Sul, Minas Gerais, Paraná, Rio de Janeiro) and the Mediterranean region. Beccari noted that many of the palms being offered in the catalogues under various species names were actually C. geriba.
In 1912 Alwin Berger reduced the taxon C. plumosa, hitherto still only known from thousands in cultivation around the world yet not known from the wild, to a variety of C. romanzoffiana, as C. romanzoffiana var. plumosa.
It was first moved from the genus Cocos in 1891 by Otto Kuntze in his Revisio Generum Plantarum, which was widely ignored, but in 1916 Beccari raised Arecastrum to a monotypic genus and synonymised all species in the former subgenus to A. romanzoffianum. By this time South American imports of palm seed were being sold across Europe under a plethora of names, according to Beccari often mislabelled but impossible to determine down to 'correct' geographical species, thus he interpreted the taxa to belong to a single extremely variable species. This interpretation was long followed. Beccari also considered C. botryophora part of this species, an interpretation that is now partially rejected. Beccari recognised the following, now rejected, varieties:
Arecastrum romanzoffianum var. australe – from C. australis, C. datil
Arecastrum romanzoffianum var. botryophora – from C. botryophora. Under this taxon Beccari (mis)identified plants growing in Rio de Janeiro that he had earlier considered C. geriba; the synonymy was later rejected.
Arecastrum romanzoffianum var. ensifolium – from C. botryophora var. ensifolium of Bahia.
Arecastrum romanzoffianum var. genuinum – nominate form. Includes C. romanzoffiana, C. plumosa, C. geriba, C. martiana.
Arecastrum romanzoffianum var. genuinum subvar. minus – from a dwarf individual plant of uncertain origins in cultivation in a private collection in Hyères, France.
Arecastrum romanzoffianum var. micropindo – from a population of dwarf plants from Paraguay earlier misidentified as C. acaulis.
Beccari also reinstated Martius' Syagrus.
Arecastrum was subsumed under Syagrus in 1968.
A genetics study by Bee F. Gunn found that S. romanzoffiana did not group with the other two Syagrus species tested, but with Lytocaryum weddellianum. If this finding has merit, then L. weddellianum, being the junior taxon, becomes Arecastrum weddellianum.
Distribution
It occurs from eastern and central Paraguay and northern Argentina north to eastern and southern Brazil and northern Uruguay. It is quite common in its native range.
In Brazil it occurs in the states of Bahia, Distrito Federal, Goiás, Mato Grosso do Sul, Espírito Santo, Minas Gerais, Rio de Janeiro, São Paulo, Paraná, Rio Grande do Sul and Santa Catarina. In Argentina it occurs in the provinces of Buenos Aires, Chaco, Corrientes, Entre Ríos, Formosa, Mendoza, Misiones (El Dorado, Guaraní, Iguazú), Santa Fe, San Juan and San Luis. In Uruguay it occurs in the departments of Maldonado, Montevideo, Rivera, Rocha, Salto, Tacuarembó and Treinta y Tres. In Paraguay it occurs in the departments of Alto Paraná, Amambay, Caaguazú, Canindeyú, Central, Concepción, Cordillera, Guairá, Ñeembucú, Paraguarí and San Pedro.
Non-native distribution
The queen palm is reportedly naturalized to some extent in Florida, Queensland, Australia, Honduras, and the island of Mauritius.
On Mauritius, seedlings were recorded in gardens in the now highly residential area of 'Montagne Ory' near the village of Moka from 1981–1984 until at least 1999.
The government of the Australian state of Queensland considers it a potential 'invasive plant', and discourages home-owners from planting it, but it is not prohibited or restricted, or a declared weed. According to the 1989 Flora of Southeastern Queensland, it is naturalised in southern Queensland and the Atherton Tableland.
It is not regarded as invasive or naturalised in New South Wales, although numerous sightings have been recorded around Sydney and along the coast, including in nature parks. It has been classified as a noxious weed by one local council in New South Wales since at least 2010; as of 2015 it is not prohibited or restricted in the state, but in one local region it is classified as a 'serious threat ... not widely distributed in the area'. It was possibly first identified as a potential environmental weed for the area in a book from 1998. Sale is discouraged and the palms are being removed.
It is widely planted throughout much of Florida and other parts of the southern United States, although it is not yet widely established in the flora as of 2000.
It can also be found in some parts of the Mediterranean basin.
Ecology
It is a common tree in many habitats.
Birds recorded eating the fruit pulp from fallen fruit include the rufous-bellied thrush (Turdus rufiventris), the bananaquit (Coereba flaveola), the violaceous euphonia (Euphonia violacea), the Brazilian tanager (Ramphocelus bresilius) and the tropical parula (Parula pitiayumi). Azure jays (Cyanocorax caeruleus) feed on the fruit pulp both picked directly from the infructescence and from fallen fruit lying on the ground, usually swallowing the fruits whole or carrying them away from the tree. The two toucans Ramphastos vitellinus and R. dicolorus pluck ripe fruits directly from the infructescence and regurgitate the seeds; the chachalaca Ortalis guttata (or a closely related species, depending on one's taxonomic interpretation) and the two related guans Penelope obscura and P. superciliaris do so as well, but spread the seeds in their defecations and may thus be important dispersers.
The squirrel Guerlinguetus brasiliensis ssp. ingrami is an important seed predator of this palm where the ranges of the two species overlap, breaking the nut open with its teeth at one of the three pores in the top of the nutshell. It preferentially targets bug-infested nuts. A long-term study of the feeding behaviour of this squirrel in a secondary Araucaria forest found that, although in certain seasons other plants were consumed in larger quantities, the palm nuts were eaten in large quantities throughout the entire year and were thus the most important food item.
Other important seed predators are seed-boring weevils and palm bruchid beetles of the genus Pachymerus. Grubs of P. bactris, P. cardo and P. nucleorum have all been found within the seeds of this species (among many other related South American palms). The large, colourful weevil Revena rubiginosa appears to be the main seed predator in numerous areas and is thought to be a specialist seed predator of this palm. It infests the developing seeds before the fruits are ripe, while they are still attached to the infructescence, and the grubs exit the seed to pupate underground around the palm when the fruit falls. Other weevils found to be seed predators of this palm are Anchylorhynchus aegrotus and A. variabilis, but these species are also flower visitors and likely important specialised pollinators.
The fruit are eaten by tapirs, which might be important seed dispersers, and some wild canids such as the pampas fox and the crab-eating fox.
Three studies in Brazil, at four locations lacking other large frugivores such as squirrels, peccaries, deer and tapirs, found coatis (Nasua nasua) to be important seed dispersers in such areas. The coatis climb into the palm to reach the fruit, which in one urban study was found in 10% of all stool samples, although it constituted only 2.5% of the total faecal matter. Other important dispersing mammals were agoutis (Dasyprocta azarae), which sometimes cache seeds. The black-eared opossum (Didelphis aurita) and a russet rice rat (Euryoryzomys russatus) were also found among the fallen fruits.
The leaves of this palm are consumed by the caterpillars of the butterflies Blepolenis batea (recorded in Uruguay in 1974) and Brassolis astyra ssp. astyra, B. sophorae and Catoblepia amphirhoe (recorded in Santa Catarina in 1968), while Opsiphanes invirae, either the nominate form or possibly the subspecies remoliatus, was recorded feeding on this palm in both regions. O. quiteria was also recorded feeding on the leaves in Argentina in 1969.
Larvae of the giant day-flying moth Paysandisia archon are known to attack the pith of this palm, along with that of many other palm species, at least in Europe, where neither the moth nor the palm is native; the infestation can kill the palm. The moth prefers other palm genera with hairier trunks, such as Trachycarpus, Trithrinax or Chamaerops.
The caterpillars of the Indonesian butterfly Cephrenes augiades ssp. augiades and the Australian C. trichopepla may also feed on the leaves of this palm.
The bases of the pruned fronds remain on the tree for several months and could serve as a habitat for insects or snails.
Cultivation and uses
The queen palm is planted in many tropical and subtropical areas. It is popular as an ornamental tree and much used in urban landscaping. It is hardy to −5 °C (zone 9a), but the dead fronds must be pruned to keep the tree visually pleasing. In some areas the fallen fruit are known for attracting unwelcome insects.
In Brazil the palm is often cut down so that the leaves and inflorescences can be used as animal (cattle) fodder, especially in times of drought. The leaves are similarly used in Argentina. Its fruits are edible and sometimes eaten; they consist of a hard nut surrounded by a thin layer of fibrous flesh that is orange and sticky when ripe. The flavour is sweet and could be described as a mixture of plum and banana.
According to Blombery & Rodd (1982), people eat the unexpanded leaves of the apical buds in some regions. Fallen fruits are fed to pigs, and palm trunks are often used in construction, frequently hollowed out to make water pipes or aqueducts for irrigation. In 1920s Argentina it was cultivated as a crop. The young buds are consumed as vegetables, pickled or preserved in oil. The trunk of the palm provides sago.
| Biology and health sciences | Arecales (inc. Palms) | Plants |
7376964 | https://en.wikipedia.org/wiki/Clamator | Clamator | Clamator is a genus of large brood-parasitic cuckoos with crests and graduated tails.
The genus was erected by German naturalist Johann Jakob Kaup in 1829 with the great spotted cuckoo (Clamator glandarius) as the type species. The name Clamator is Latin for "he who shouts" from clamare, "to shout".
Species
There are four species:
Great spotted cuckoo (Clamator glandarius)
Jacobin cuckoo (Clamator jacobinus)
Levaillant's cuckoo (Clamator levaillantii)
Chestnut-winged cuckoo (Clamator coromandus)
Distribution
Clamator cuckoos are found in warmer parts of southern Europe and Asia, and in Africa south of the Sahara Desert. These are birds of warm open scrubby habitats, but some species are at least partially migratory, leaving for warmer and wetter areas in winter.
These are large cuckoos, all at least long, with broad chestnut wings and long narrow tails. They are strikingly patterned with black, white and brown plumage. The sexes are similar but the juvenile plumages are distinctive. The two African species each also have two distinct colour morphs, light and dark.
All the Clamator cuckoos are brood parasites, which lay a single egg in the nests of medium-sized hosts, such as magpies, starlings, shrikes, laughingthrushes, bulbuls and babblers, depending on location. Unlike the common cuckoo, neither the hen nor the hatched chick of Clamator species evicts the host's eggs, but the host's young often die because they cannot compete successfully with the cuckoo for food.
These are noisy birds, with persistent and loud calls. They feed on large insects, with hairy caterpillars, which are distasteful to many birds, being a specialty.
| Biology and health sciences | Cuculiformes and relatives | Animals |
11038318 | https://en.wikipedia.org/wiki/Methamphetamine | Methamphetamine | Methamphetamine (contracted from N-methylamphetamine) is a potent central nervous system (CNS) stimulant that is mainly used as a recreational or performance-enhancing drug and less commonly as a second-line treatment for attention deficit hyperactivity disorder (ADHD). It has also been researched as a potential treatment for traumatic brain injury. Methamphetamine was discovered in 1893 and exists as two enantiomers: levo-methamphetamine and dextro-methamphetamine. Methamphetamine properly refers to a specific chemical substance, the racemic free base, which is an equal mixture of levomethamphetamine and dextromethamphetamine in their pure amine forms, but the hydrochloride salt, commonly called crystal meth, is widely used. Methamphetamine is rarely prescribed owing to concerns involving its potential for recreational use as an aphrodisiac and euphoriant, among other concerns, as well as the availability of safer substitute drugs with comparable treatment efficacy, such as Adderall and Vyvanse. While pharmaceutical formulations of methamphetamine in the United States are labeled as methamphetamine hydrochloride, they contain dextromethamphetamine as the active ingredient. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine.
Both racemic methamphetamine and dextromethamphetamine are illicitly trafficked and sold owing to their potential for recreational use. The highest prevalence of illegal methamphetamine use occurs in parts of Asia and Oceania, and in the United States, where racemic methamphetamine and dextromethamphetamine are classified as Schedule II controlled substances. Levomethamphetamine is available as an over-the-counter (OTC) drug for use as an inhaled nasal decongestant in the United States. Internationally, the production, distribution, sale, and possession of methamphetamine is restricted or banned in many countries, owing to its placement in schedule II of the United Nations Convention on Psychotropic Substances treaty. While dextromethamphetamine is a more potent drug, racemic methamphetamine is illicitly produced more often, owing to the relative ease of synthesis and regulatory limits of chemical precursor availability.
In low to moderate doses, methamphetamine can elevate mood, increase alertness, concentration and energy in fatigued individuals, reduce appetite, and promote weight loss. At very high doses, it can induce psychosis, breakdown of skeletal muscle, seizures, and bleeding in the brain. Chronic high-dose use can precipitate unpredictable and rapid mood swings, stimulant psychosis (e.g., paranoia, hallucinations, delirium, and delusions), and violent behavior. Recreationally, methamphetamine's ability to increase energy has been reported to lift mood and increase sexual desire to such an extent that users are able to engage in sexual activity continuously for several days while binging the drug. Methamphetamine is known to possess a high addiction liability (i.e., a high likelihood that long-term or high dose use will lead to compulsive drug use) and high dependence liability (i.e., a high likelihood that withdrawal symptoms will occur when methamphetamine use ceases). Discontinuing methamphetamine after heavy use may lead to a post-acute-withdrawal syndrome, which can persist for months beyond the typical withdrawal period. At high doses, methamphetamine is neurotoxic to human midbrain dopaminergic neurons and, to a lesser extent, serotonergic neurons. Methamphetamine neurotoxicity causes adverse changes in brain structure and function, such as reductions in grey matter volume in several brain regions, as well as adverse changes in markers of metabolic integrity.
Methamphetamine belongs to the substituted phenethylamine and substituted amphetamine chemical classes. It is related to the other dimethylphenethylamines as a positional isomer of these compounds, which share the common chemical formula C10H15N.
Uses
Medical
In the United States, methamphetamine hydrochloride, sold under the brand name Desoxyn, is FDA-approved for the treatment of attention deficit hyperactivity disorder (ADHD); however, the FDA notes that the limited therapeutic usefulness of methamphetamine should be weighed against the risks associated with its use. To limit toxicity and the risk of side effects, FDA guidelines recommend an initial methamphetamine dose of 5–10 mg/day for ADHD in adults and children over six years of age, which may be increased at weekly intervals of 5 mg, up to 25 mg/day, until the optimum clinical response is found; the usual effective dose is around 20–25 mg/day. Methamphetamine is sometimes prescribed off-label for obesity, narcolepsy, and idiopathic hypersomnia. In the United States, methamphetamine's levorotatory form is available in some over-the-counter (OTC) nasal decongestant products.
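The weekly titration described above is simple enough to enumerate directly. The following is a minimal illustrative sketch of that arithmetic, assuming only the guideline figures quoted here (start at 5 mg/day, increase by 5 mg each week, ceiling of 25 mg/day); the function name and its defaults are hypothetical, and this is in no way a clinical dosing tool.

```python
# Hypothetical sketch of the weekly titration arithmetic quoted above:
# start at 5 mg/day, add 5 mg per week, stop at the 25 mg/day ceiling.
# Illustrative only; not a clinical tool.

def titration_schedule(start_mg=5, step_mg=5, max_mg=25):
    """Yield (week, dose_mg) pairs until the dose ceiling is reached."""
    week, dose = 1, start_mg
    while dose <= max_mg:
        yield week, dose
        week += 1
        dose += step_mg

for week, dose in titration_schedule():
    print(f"Week {week}: {dose} mg/day")
# Prints Week 1: 5 mg/day through Week 5: 25 mg/day
```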
Although the pharmaceutical name "methamphetamine hydrochloride" may suggest a racemic mixture, Desoxyn contains enantiopure dextromethamphetamine, which is a more potent stimulant than both levomethamphetamine and racemic methamphetamine. This naming convention deviates from the standard practice observed with other stimulants, such as Adderall and dextroamphetamine, where the dextrorotatory enantiomer is explicitly identified as an active ingredient in both generic and brand-name pharmaceuticals.
As methamphetamine is associated with a high potential for misuse, the drug is regulated under the Controlled Substances Act and is listed under Schedule II in the United States. Methamphetamine hydrochloride dispensed in the United States is required to include a boxed warning regarding its potential for recreational misuse and addiction liability.
Desoxyn and Desoxyn Gradumet are both pharmaceutical forms of the drug. The latter, an extended-release formulation that flattens the curve of the drug's effect while extending it, is no longer produced.
Recreational
Methamphetamine is often used recreationally for its effects as a potent euphoriant and stimulant as well as aphrodisiac qualities.
According to a National Geographic TV documentary on methamphetamine, an entire subculture known as party and play is based around sexual activity and methamphetamine use. Participants in this subculture, which consists almost entirely of homosexual male methamphetamine users, will typically meet up through internet dating sites and have sex. Because of its strong stimulant and aphrodisiac effects and inhibitory effect on ejaculation, with repeated use, these sexual encounters will sometimes occur continuously for several days on end. The crash following the use of methamphetamine in this manner is very often severe, with marked hypersomnia (excessive daytime sleepiness). The party and play subculture is prevalent in major US cities such as San Francisco and New York City.
Contraindications
Methamphetamine is contraindicated in individuals with a history of substance use disorder, heart disease, or severe agitation or anxiety, or in individuals currently experiencing arteriosclerosis, glaucoma, hyperthyroidism, or severe hypertension. The FDA states that individuals who have experienced hypersensitivity reactions to other stimulants in the past or are currently taking monoamine oxidase inhibitors should not take methamphetamine. The FDA also advises individuals with bipolar disorder, depression, elevated blood pressure, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome to monitor their symptoms while taking methamphetamine. Owing to the potential for stunted growth, the FDA advises monitoring the height and weight of growing children and adolescents during treatment.
Adverse effects
Physical
Cardiovascular
Methamphetamine is a sympathomimetic drug that causes vasoconstriction and tachycardia. Methamphetamine also promotes abnormal extra heartbeats and irregular heart rhythms, some of which may be life-threatening.
Other physical effects
The effects can also include loss of appetite, hyperactivity, dilated pupils, flushed skin, excessive sweating, increased movement, dry mouth and teeth grinding (potentially leading to a condition informally known as meth mouth), headache, rapid breathing, high body temperature, diarrhea, constipation, blurred vision, dizziness, twitching, numbness, tremors, dry skin, acne, and pale appearance. Long-term meth users may have sores on their skin; these may be caused by scratching due to itchiness or the belief that insects are crawling under their skin, and the damage is compounded by poor diet and hygiene. Numerous deaths related to methamphetamine overdoses have been reported. Additionally, "[p]ostmortem examinations of human tissues have linked use of the drug to diseases associated with aging, such as coronary atherosclerosis and pulmonary fibrosis", which may be caused "by a considerable rise in the formation of ceramides, pro-inflammatory molecules that can foster cell aging and death."
Dental and oral health ("meth mouth")
Methamphetamine users, particularly heavy users, may lose their teeth abnormally quickly, regardless of the route of administration, from a condition informally known as meth mouth. The condition is generally most severe in users who inject the drug, rather than swallow, smoke, or inhale it. According to the American Dental Association, meth mouth "is probably caused by a combination of drug-induced psychological and physiological changes resulting in xerostomia (dry mouth), extended periods of poor oral hygiene, frequent consumption of high-calorie, carbonated beverages and bruxism (teeth grinding and clenching)". As dry mouth is also a common side effect of other stimulants, which are not known to contribute to severe tooth decay, many researchers suggest that methamphetamine-associated tooth decay owes more to users' other choices. They suggest the side effect has been exaggerated and stylized to create a stereotype of current users as a deterrent for new ones.
Sexually transmitted infection
Methamphetamine use was found to be related to higher frequencies of unprotected sexual intercourse with both HIV-positive and status-unknown casual partners, an association more pronounced among HIV-positive participants. These findings suggest that methamphetamine use and engagement in unprotected anal intercourse are co-occurring risk behaviors that potentially heighten the risk of HIV transmission among gay and bisexual men. Methamphetamine use allows users of both sexes to engage in prolonged sexual activity, which may cause genital sores and abrasions as well as priapism in men. Methamphetamine may also cause sores and abrasions in the mouth via bruxism, increasing the risk of sexually transmitted infection.
Besides the sexual transmission of HIV, it may also be transmitted between users who share a common needle. The level of needle sharing among methamphetamine users is similar to that among other drug injection users.
Psychological
The psychological effects of methamphetamine can include euphoria, dysphoria, changes in libido, alertness, apprehension and concentration, decreased sense of fatigue, insomnia or wakefulness, self-confidence, sociability, irritability, restlessness, grandiosity and repetitive and obsessive behaviors. Peculiar to methamphetamine and related stimulants is "punding", persistent non-goal-directed repetitive activity. Methamphetamine use also has a high association with anxiety, depression, amphetamine psychosis, suicide, and violent behaviors.
Neurotoxicity
Methamphetamine is directly neurotoxic to dopaminergic neurons in both lab animals and humans. Excitotoxicity, oxidative stress, metabolic compromise, ubiquitin–proteasome system (UPS) dysfunction, protein nitration, endoplasmic reticulum stress, p53 expression, and other processes contribute to this neurotoxicity. In line with its dopaminergic neurotoxicity, methamphetamine use is associated with a higher risk of Parkinson's disease. In addition to its dopaminergic neurotoxicity, a review of evidence in humans indicated that high-dose methamphetamine use can also be neurotoxic to serotonergic neurons. It has been demonstrated that a high core temperature is correlated with an increase in the neurotoxic effects of methamphetamine. Withdrawal of methamphetamine in dependent persons may lead to post-acute withdrawal, which persists months beyond the typical withdrawal period.
Magnetic resonance imaging studies on human methamphetamine users have also found evidence of neurodegeneration, or adverse neuroplastic changes in brain structure and function. In particular, methamphetamine appears to cause hyperintensity and hypertrophy of white matter, marked shrinkage of hippocampi, and reduced gray matter in the cingulate cortex, limbic cortex, and paralimbic cortex in recreational methamphetamine users. Moreover, evidence suggests that adverse changes in the level of biomarkers of metabolic integrity and synthesis occur in recreational users, such as a reduction in N-acetylaspartate and creatine levels and elevated levels of choline and myoinositol.
Methamphetamine has been shown to activate TAAR1 in human astrocytes and generate cAMP as a result. Activation of astrocyte-localized TAAR1 appears to function as a mechanism by which methamphetamine attenuates membrane-bound EAAT2 (SLC1A2) levels and function in these cells.
Methamphetamine binds to and activates both sigma receptor subtypes, σ1 and σ2, with micromolar affinity. Sigma receptor activation may promote methamphetamine-induced neurotoxicity by facilitating hyperthermia, increasing dopamine synthesis and release, influencing microglial activation, and modulating apoptotic signaling cascades and the formation of reactive oxygen species.
Addiction
Current models of addiction from chronic drug use involve alterations in gene expression in certain parts of the brain, particularly the nucleus accumbens. The most important transcription factors that produce these alterations are ΔFosB, cAMP response element binding protein (CREB), and nuclear factor kappa B (NFκB). ΔFosB plays a crucial role in the development of drug addictions, since its overexpression in D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for most of the behavioral and neural adaptations that arise from addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both directly oppose the induction of ΔFosB in the nucleus accumbens (i.e., they oppose increases in its expression). Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug use (i.e., the alterations mediated by ΔFosB). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sex addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sex addictions (i.e., drug-induced compulsive sexual behaviors) are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs, such as amphetamine or methamphetamine.
Epigenetic factors
Methamphetamine addiction is persistent for many individuals, with 61% of individuals treated for addiction relapsing within one year. About half of those with methamphetamine addiction continue with use over a ten-year period, while the other half reduce use starting at about one to four years after initial use.
The frequent persistence of addiction suggests that long-lasting changes in gene expression may occur in particular regions of the brain, and may contribute importantly to the addiction phenotype. In 2014, a crucial role was found for epigenetic mechanisms in driving lasting changes in gene expression in the brain.
A review in 2015 summarized a number of studies involving chronic methamphetamine use in rodents. Epigenetic alterations were observed in the brain reward pathways, including areas like ventral tegmental area, nucleus accumbens, and dorsal striatum, the hippocampus, and the prefrontal cortex. Chronic methamphetamine use caused gene-specific histone acetylations, deacetylations and methylations. Gene-specific DNA methylations in particular regions of the brain were also observed. The various epigenetic alterations caused downregulations or upregulations of specific genes important in addiction. For instance, chronic methamphetamine use caused methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction.
In methamphetamine addicted rats, epigenetic regulation through reduced acetylation of histones, in brain striatal neurons, caused reduced transcription of glutamate receptors. Glutamate receptors play an important role in regulating the reinforcing effects of addictive drugs.
Administration of methamphetamine to rodents causes DNA damage in their brain, particularly in the nucleus accumbens region. During repair of such DNA damages, persistent chromatin alterations may occur such as in the methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in methamphetamine addiction.
Treatment and management
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
As of 2019, there is no effective pharmacotherapy for methamphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Dependence and withdrawal
Tolerance is expected to develop with regular methamphetamine use and, when used recreationally, this tolerance develops rapidly. In dependent users, withdrawal symptoms are positively correlated with the level of drug tolerance. Depression from methamphetamine withdrawal lasts longer and is more severe than that of cocaine withdrawal.
According to the current Cochrane review on drug dependence and withdrawal in recreational users of methamphetamine, "when chronic heavy users abruptly discontinue [methamphetamine] use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose". Withdrawal symptoms in chronic, high-dose users are frequent, occurring in up to 87.6% of cases, and persist for three to four weeks with a marked "crash" phase occurring during the first week. Methamphetamine withdrawal symptoms can include anxiety, drug craving, dysphoric mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and vivid or lucid dreams.
Methamphetamine that is present in a mother's bloodstream can pass through the placenta to a fetus and be secreted into breast milk. Infants born to mothers who use methamphetamine may experience a neonatal withdrawal syndrome, with symptoms involving abnormal sleep patterns, poor feeding, tremors, and hypertonia. This withdrawal syndrome is relatively mild and only requires medical intervention in approximately 4% of cases.
Neonatal
Unlike with other drugs, babies with prenatal exposure to methamphetamine do not show immediate signs of withdrawal. Instead, cognitive and behavioral problems start emerging when the children reach school age.
A prospective cohort study of 330 children showed that at the age of 3, children with methamphetamine exposure showed increased emotional reactivity, as well as more signs of anxiety and depression; and at the age of 5, children showed higher rates of externalizing disorders and attention deficit hyperactivity disorder (ADHD).
Overdose
Methamphetamine overdose is a broad term. It frequently refers to an exaggeration of the drug's usual effects, with features such as irritability, agitation, hallucinations and paranoia. The cardiovascular effects are typically not noticed in young healthy people; hypertension and tachycardia are not apparent unless measured. A moderate overdose of methamphetamine may induce symptoms such as: abnormal heart rhythm, confusion, difficult and/or painful urination, high or low blood pressure, high body temperature, over-active and/or over-responsive reflexes, muscle aches, severe agitation, rapid breathing, tremor, urinary hesitancy, and an inability to pass urine. An extremely large overdose may produce symptoms such as adrenergic storm, methamphetamine psychosis, substantially reduced or no urine output, cardiogenic shock, bleeding in the brain, circulatory collapse, hyperpyrexia (i.e., dangerously high body temperature), pulmonary hypertension, kidney failure, rapid muscle breakdown, serotonin syndrome, and a form of stereotypy ("tweaking"). A methamphetamine overdose will likely also result in mild brain damage owing to dopaminergic and serotonergic neurotoxicity. Death from methamphetamine poisoning is typically preceded by convulsions and coma.
Psychosis
Use of methamphetamine can result in a stimulant psychosis which may present with a variety of symptoms (e.g., paranoia, hallucinations, delirium, and delusions). A Cochrane Collaboration review on treatment for amphetamine, dextroamphetamine, and methamphetamine use-induced psychosis states that about 5–15% of users fail to recover completely. The same review asserts that, based upon at least one trial, antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Amphetamine psychosis may also develop occasionally as a treatment-emergent side effect.
Death from overdose
The CDC reported the number of deaths in the United States involving psychostimulants with abuse potential to be 23,837 in 2020 and 32,537 in 2021. This category (ICD-10 code T43.6) primarily includes methamphetamine but also other stimulants such as amphetamine and methylphenidate. The mechanism of death in these cases is not reported in these statistics and is difficult to know. Unlike fentanyl, which causes respiratory depression, methamphetamine is not a respiratory depressant. Some deaths result from intracranial hemorrhage, and some are cardiovascular in nature, including flash pulmonary edema and ventricular fibrillation.
Emergency treatment
Acute methamphetamine intoxication is largely managed by treating the symptoms, and treatment may initially include administration of activated charcoal and sedation. There is not enough evidence on hemodialysis or peritoneal dialysis in cases of methamphetamine intoxication to determine their usefulness. Forced acid diuresis (e.g., with vitamin C) will increase methamphetamine excretion but is not recommended, as it may increase the risk of aggravating acidosis or cause seizures or rhabdomyolysis. Hypertension presents a risk for intracranial hemorrhage (i.e., bleeding in the brain) and, if severe, is typically treated with intravenous phentolamine or nitroprusside. Blood pressure often drops gradually following sufficient sedation with a benzodiazepine and the provision of a calming environment.
Antipsychotics such as haloperidol are useful in treating agitation and psychosis from methamphetamine overdose. Beta blockers with lipophilic properties and CNS penetration such as metoprolol and labetalol may be useful for treating CNS and cardiovascular toxicity. The mixed alpha- and beta-blocker labetalol is especially useful for treatment of concomitant tachycardia and hypertension induced by methamphetamine. The phenomenon of "unopposed alpha stimulation" has not been reported with the use of beta-blockers for treatment of methamphetamine toxicity.
Interactions
Methamphetamine is metabolized by the liver enzyme CYP2D6, so CYP2D6 inhibitors will prolong the elimination half-life of methamphetamine. Methamphetamine also interacts with monoamine oxidase inhibitors (MAOIs), since both MAOIs and methamphetamine increase plasma catecholamines; concurrent use of the two is therefore dangerous. Methamphetamine may decrease the effects of sedatives and depressants and increase the effects of antidepressants and other stimulants. Methamphetamine may counteract the effects of antihypertensives and antipsychotics owing to its effects on the cardiovascular system and cognition, respectively. The pH of gastrointestinal content and urine affects the absorption and excretion of methamphetamine: acidic substances reduce its absorption and increase urinary excretion, while alkaline substances do the opposite. Owing to the effect pH has on absorption, proton pump inhibitors, which reduce gastric acid, are known to interact with methamphetamine. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans. Similarly, norepinephrine–dopamine reuptake inhibitors (NDRIs) like methylphenidate and bupropion prevent norepinephrine and dopamine release induced by amphetamines, and bupropion has been found to reduce the subjective and sympathomimetic effects of methamphetamine in humans.
Pharmacology
Pharmacodynamics
Methamphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) that regulates brain catecholamine systems. Activation of TAAR1 increases cyclic adenosine monophosphate (cAMP) production and either completely inhibits or reverses the transport direction of the dopamine transporter (DAT), norepinephrine transporter (NET), and serotonin transporter (SERT). When methamphetamine binds to TAAR1, it triggers transporter phosphorylation via protein kinase A (PKA) and protein kinase C (PKC) signaling, ultimately resulting in the internalization or reverse function of monoamine transporters. Methamphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through a Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent signaling pathway, in turn producing dopamine efflux. TAAR1 has been shown to reduce the firing rate of neurons through direct activation of G protein-coupled inwardly-rectifying potassium channels. TAAR1 activation by methamphetamine in astrocytes appears to negatively modulate the membrane expression and function of EAAT2, a type of glutamate transporter.
In addition to its effect on the plasma membrane monoamine transporters, methamphetamine inhibits synaptic vesicle function by inhibiting VMAT2, which prevents monoamine uptake into the vesicles and promotes their release. This results in the outflow of monoamines from synaptic vesicles into the cytosol (intracellular fluid) of the presynaptic neuron, and their subsequent release into the synaptic cleft by the phosphorylated transporters. Other transporters that methamphetamine is known to inhibit are SLC22A3 and SLC22A5. SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter.
Methamphetamine is also an agonist of the alpha-2 adrenergic receptors and sigma receptors with a greater affinity for σ1 than σ2, and inhibits monoamine oxidase A (MAO-A) and monoamine oxidase B (MAO-B). Sigma receptor activation by methamphetamine may facilitate its central nervous system stimulant effects and promote neurotoxicity within the brain. Dextromethamphetamine is a stronger psychostimulant, but levomethamphetamine has stronger peripheral effects, a longer half-life, and longer perceived effects among heavy substance users. At high doses, both enantiomers of methamphetamine can induce similar stereotypy and methamphetamine psychosis, but levomethamphetamine has shorter psychodynamic effects.
Pharmacokinetics
The bioavailability of methamphetamine is 67% orally, 79% intranasally, 67 to 90% via inhalation (smoking), and 100% intravenously. Following oral administration, methamphetamine is well absorbed into the bloodstream, with peak plasma methamphetamine concentrations achieved approximately 3.13–6.3 hours post-ingestion. Methamphetamine is also well absorbed following inhalation and following intranasal administration. Because of the high lipophilicity conferred by its methyl group, methamphetamine can cross the blood–brain barrier faster than other stimulants, and within the brain it is more resistant to degradation by monoamine oxidase. The amphetamine metabolite peaks at 10–24 hours. Methamphetamine is excreted by the kidneys, with the rate of excretion into the urine heavily influenced by urinary pH. When taken orally, 30–54% of the dose is excreted in urine as methamphetamine and 10–23% as amphetamine. Following IV doses, about 45% is excreted as methamphetamine and 7% as amphetamine. The elimination half-life of methamphetamine varies over a range of 5–30 hours, averaging 9 to 12 hours in most studies. The elimination half-life of methamphetamine does not vary by route of administration, but it is subject to substantial interindividual variability.
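Because elimination is approximately first-order, the half-life figures above translate directly into a plasma decay curve. The following is a minimal sketch of that arithmetic, assuming the standard one-compartment model C(t) = C0 · 0.5^(t/t½) and picking an illustrative 10-hour half-life from the 9 to 12 hour average range quoted above; it is a generic textbook relationship, not a result from any study cited here.

```python
# Minimal sketch of first-order elimination: C(t) = C0 * 0.5 ** (t / t_half).
# The 10 h half-life is an illustrative value from the 9-12 h range above.

def fraction_remaining(t_hours, t_half=10.0):
    """Fraction of the initial plasma concentration left after t_hours."""
    return 0.5 ** (t_hours / t_half)

for t in (10, 20, 30, 48):
    print(f"after {t:>2} h: {fraction_remaining(t):.1%} remaining")
# after 10 h: 50.0%, after 20 h: 25.0%, after 30 h: 12.5%, after 48 h: 3.6%
```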
CYP2D6, dopamine β-hydroxylase, flavin-containing monooxygenase 3, butyrate-CoA ligase, and glycine N-acyltransferase are the enzymes known to metabolize methamphetamine or its metabolites in humans. The primary metabolites are amphetamine and 4-hydroxymethamphetamine; other minor metabolites include: , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone, the metabolites of amphetamine. Among these metabolites, the active sympathomimetics are amphetamine, , , , and norephedrine. Methamphetamine is a CYP2D6 inhibitor.
The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways include:
Detection in biological fluids
Methamphetamine and amphetamine are often measured in urine or blood as part of drug testing for sports, employment, poisoning diagnostics, and forensics. Chiral techniques may be employed to help distinguish the source of the drug and determine whether it was obtained illicitly or legally via prescription or prodrug. Chiral separation is needed to assess the possible contribution of levomethamphetamine, which is an active ingredient in some OTC nasal decongestants, toward a positive test result. Dietary zinc supplements can mask the presence of methamphetamine and other drugs in urine.
Chemistry
Methamphetamine is a chiral compound with two enantiomers, dextromethamphetamine and levomethamphetamine. At room temperature, the free base of methamphetamine is a clear and colorless liquid with an odor characteristic of geranium leaves. It is soluble in diethyl ether and ethanol as well as miscible with chloroform.
In contrast, the methamphetamine hydrochloride salt is odorless with a bitter taste. It has a melting point between and, at room temperature, occurs as white crystals or a white crystalline powder. The hydrochloride salt is also freely soluble in ethanol and water. The crystal structure of either enantiomer is monoclinic with P21 space group; at , it has lattice parameters a = 7.10 Å, b = 7.29 Å, c = 10.81 Å, and β = 97.29°.
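The quoted lattice parameters can be turned into a unit-cell volume with the standard monoclinic relation V = a·b·c·sin β. The short sketch below works through that arithmetic; the computed volume is illustrative and is not a figure stated in the source.

```python
# Unit-cell volume of a monoclinic lattice, V = a * b * c * sin(beta),
# using the lattice parameters quoted above. Output is illustrative.
import math

a, b, c = 7.10, 7.29, 10.81          # lattice lengths in angstroms
beta = math.radians(97.29)           # monoclinic angle in radians

volume = a * b * c * math.sin(beta)  # cubic angstroms
print(f"unit cell volume ≈ {volume:.0f} Å³")  # ≈ 555 Å³
```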
Degradation
A 2011 study into the destruction of methamphetamine using bleach showed that effectiveness is correlated with exposure time and concentration. A year-long study (also from 2011) showed that methamphetamine in soils is a persistent pollutant. In a 2013 study of bioreactors in wastewater, methamphetamine was found to be largely degraded within 30 days under exposure to light.
Synthesis
Racemic methamphetamine may be prepared starting from phenylacetone by either the Leuckart or reductive amination methods. In the Leuckart reaction, one equivalent of phenylacetone is reacted with two equivalents of to produce the formyl amide of methamphetamine plus carbon dioxide and methylamine as side products. In this reaction, an iminium cation is formed as an intermediate which is reduced by the second equivalent of . The intermediate formyl amide is then hydrolyzed under acidic aqueous conditions to yield methamphetamine as the final product. Alternatively, phenylacetone can be reacted with methylamine under reducing conditions to yield methamphetamine.
History, society, and culture
Amphetamine, discovered before methamphetamine, was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine. Shortly after, methamphetamine was synthesized from ephedrine in 1893 by Japanese chemist Nagai Nagayoshi. Three decades later, in 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata via reduction of ephedrine using red phosphorus and iodine.
From 1938, methamphetamine was marketed on a large scale in Germany as a nonprescription drug under the brand name Pervitin, produced by the Berlin-based Temmler pharmaceutical company. It was used by all branches of the combined armed forces of the Third Reich for its stimulant effects and to induce extended wakefulness. Pervitin became colloquially known among the German troops as "Stuka-Tablets" (Stuka-Tabletten) and "Hermann-Göring-Pills" (Hermann-Göring-Pillen), as a snide allusion to Göring's widely known addiction to drugs. However, the side effects, particularly the withdrawal symptoms, were so serious that the army sharply cut back its usage in 1940. By 1941, usage was restricted to a doctor's prescription, and the military tightly controlled its distribution. Soldiers would only receive a couple of tablets at a time and were discouraged from using them in combat. Historian Łukasz Kamieński says,
Some soldiers turned violent, committing war crimes against civilians; others attacked their own officers. At the end of the war, it was used as part of a new drug: D-IX.
Obetrol, patented by Obetrol Pharmaceuticals in the 1950s and indicated for the treatment of obesity, was one of the first brands of pharmaceutical methamphetamine products. Because of the psychological and stimulant effects of methamphetamine, Obetrol became a popular diet pill in America in the 1950s and 1960s. Eventually, as the addictive properties of the drug became known, governments began to strictly regulate the production and distribution of methamphetamine. For example, during the early 1970s in the United States, methamphetamine became a schedule II controlled substance under the Controlled Substances Act. Methamphetamine is currently sold under the trade name Desoxyn, originally trademarked by the Danish pharmaceutical company Lundbeck; as of January 2013, the Desoxyn trademark had been sold to the Italian pharmaceutical company Recordati.
Trafficking
The Golden Triangle (Southeast Asia), and specifically Shan State, Myanmar, is the world's leading producer of methamphetamine, as production has shifted there to Yaba tablets and crystalline methamphetamine, including for export to the United States and across East and Southeast Asia and the Pacific.
Concerning the accelerating synthetic drug production in the region, the Cantonese Chinese syndicate Sam Gor, also known as The Company, is understood to be the main international crime syndicate responsible for this shift. It is made up of members of five different triads. Sam Gor is primarily involved in drug trafficking, earning at least $8 billion per year, and is alleged to control 40% of the Asia-Pacific methamphetamine market while also trafficking heroin and ketamine. The organization is active in a variety of countries, including Myanmar, Thailand, New Zealand, Australia, Japan, China, and Taiwan. Sam Gor previously produced meth in Southern China and is now believed to manufacture mainly in the Golden Triangle, specifically Shan State, Myanmar, and to be responsible for much of the massive surge of crystal meth around 2019. The group is understood to be headed by Tse Chi Lop, a gangster born in Guangzhou, China, who also holds a Canadian passport.
Liu Zhaohua was another individual involved in the production and trafficking of methamphetamine until his arrest in 2005. It was estimated over 18 tonnes of methamphetamine were produced under his watch.
Legal status
The production, distribution, sale, and possession of methamphetamine is restricted or illegal in many jurisdictions. In some jurisdictions, it is legally available as a prescription medication. Methamphetamine has been placed in schedule II of the United Nations Convention on Psychotropic Substances treaty, indicating that it has limited medical use.
Research
Animal models have shown that low-dose methamphetamine improves cognitive and behavioural functioning following TBI (traumatic brain injury). This is in contrast to high, repeated doses which cause neurotoxicity. These models demonstrate that low-dose methamphetamine increases neurogenesis and reduces apoptosis in the dentate gyrus of the hippocampus following TBI. It has also been found that TBI patients testing positive for methamphetamine at the time of emergency department admission have lower rates of mortality.
It has been suggested, based on animal research, that calcitriol, the active metabolite of vitamin D, can provide significant protection against the DA- and 5-HT-depleting effects of neurotoxic doses of methamphetamine. Protection against methamphetamine-induced neurotoxicity has also been observed following administration of ascorbic acid (vitamin C), cobalamin (vitamin B12), and vitamin E.
| Biology and health sciences | Specific drugs | Health |
11038534 | https://en.wikipedia.org/wiki/End-of-life%20care | End-of-life care | End-of-life care (EOLC) is health care provided in the time leading up to a person's death. End-of-life care can be provided in the hours, days, or months before a person dies and encompasses care and support for a person's mental and emotional needs, physical comfort, spiritual needs, and practical tasks.
EoLC is most commonly provided at home, in the hospital, or in a long-term care facility with care being provided by family members, nurses, social workers, physicians, and other support staff. Facilities may also have palliative or hospice care teams that will provide end-of-life care services. Decisions about end-of-life care are often informed by medical, financial and ethical considerations.
In most developed countries, medical spending on people in the last twelve months of life makes up roughly 10% of total aggregate medical spending, while those in the last three years of life can cost up to 25%.
Medical
Advanced care planning
Advances in medicine in the last few decades have provided an increasing number of options to extend a person's life and highlighted the importance of ensuring that an individual's preferences and values for end-of-life care are honored. Advanced care planning is the process by which a person of any age is able to provide their preferences and ensure that their future medical treatment aligns with their personal values and life goals.
It is typically a continual process, with ongoing discussions about a patient's current prognosis and condition as well as conversations about medical dilemmas and options. A person will typically have these conversations with their doctor and ultimately record their preferences in an advance healthcare directive. An advance healthcare directive is a legal document that either documents a person's decisions about desired treatment or indicates who a person has entrusted to make their care decisions for them. The two main types of advance directives are the living will and the durable power of attorney for healthcare. A living will includes a person's decisions regarding their future care, the majority of which address resuscitation and life support, but it may also cover a patient's preferences regarding hospitalization, pain control, and specific treatments that they may undergo in the future. A living will typically takes effect when a patient is terminally ill with a low chance of recovery. A durable power of attorney for healthcare allows a person to appoint another individual to make healthcare decisions for them under a specified set of circumstances. Combined directives, such as the "Five Wishes", which include components of both the living will and the durable power of attorney for healthcare, are being increasingly utilized.
Advanced care planning often includes preferences for CPR initiation and nutrition (tube feeding), as well as decisions about the use of machines to keep a person breathing or to support their heart or kidneys. Many studies have reported benefits to patients who complete advanced care planning, specifically noting improved patient and surrogate satisfaction with communication and decreased clinician distress. However, there is a notable lack of empirical data about what outcome improvements patients experience, as there are considerable discrepancies in what constitutes advanced care planning and heterogeneity in the outcomes measured. Advanced care planning remains an underutilized tool for patients. Researchers have published data supporting the use of new relationship-based and supported decision-making models that can increase the use and maximize the benefit of advanced care planning.
End-of-life care conversations
End-of-life care conversations are part of the treatment planning process for terminally ill patients requiring palliative care, involving a discussion of a patient's prognosis, specification of goals of care, and individualized treatment planning. A recent Cochrane review (2022) set out to assess the effectiveness of interpersonal communication interventions during end-of-life care. Research suggests that many patients prioritize proper symptom management, avoidance of suffering, and care that aligns with ethical and cultural standards. Specific conversations can include discussions about cardiopulmonary resuscitation (ideally occurring before the active dying phase, so as not to force the conversation during a medical crisis or emergency), place of death, organ donation, and cultural or religious traditions. As there are many factors involved in the end-of-life care decision-making process, the attitudes and perspectives of patients and families may vary; for example, family members may differ over whether life extension or life quality is the main goal of treatment. As it can be challenging for grieving families to make timely decisions that respect the patient's wishes and values, having an established advance care directive in place can prevent over-treatment, under-treatment, or further complications in treatment management.
Patients and families may also struggle to grasp the inevitability of death, and the differing risks and effects of the medical and non-medical interventions available for end-of-life care. People might avoid discussing their end-of-life care, and the timing and quality of these discussions are often poor; for example, end-of-life care conversations between COPD patients and clinicians occur infrequently and usually only once the disease has reached an advanced stage. To prevent interventions that are not in accordance with the patient's wishes, end-of-life care conversations and advance care directives can allow for the care they desire, as well as help prevent confusion and strain for family members.
In the case of critically ill babies, parents are able to participate more in decision making if they are presented with options to be discussed rather than recommendations by the doctor. Utilizing this style of communication also leads to less conflict with doctors and might help the parents cope better with the eventual outcomes.
Signs of dying
The National Cancer Institute in the United States advises that the presence of some of the following signs may indicate that death is approaching:
Drowsiness, increased sleep, and/or unresponsiveness (caused by changes in the patient's metabolism).
Confusion about time, place, and/or identity of loved ones; restlessness; visions of people and places that are not present; pulling at bed linen or clothing (caused in part by changes in the patient's metabolism).
Decreased socialization and withdrawal (caused by decreased oxygen to the brain, decreased blood flow, and mental preparation for dying).
Changes in breathing (indicating neurologic compromise and impending death) and accumulation of upper airway secretions (resulting in crackling and gurgling breath sounds).
Decreased need for food and fluids, and loss of appetite (caused by the body's need to conserve energy and its decreasing ability to use food and fluids properly).
Decreased oral intake and impaired swallowing (caused by general physical weakness and metabolic disturbances, including but not limited to hypercalcemia).
Loss of bladder or bowel control (caused by the relaxing of muscles in the pelvic area).
Darkened urine or decreased amount of urine (caused by slowing of kidney function and/or decreased fluid intake).
Skin becoming cool to the touch, particularly the hands and feet; skin may become bluish in color, especially on the underside of the body (caused by decreased circulation to the extremities).
Rattling or gurgling sounds while breathing, which may be loud (death rattle); breathing that is irregular and shallow; decreased number of breaths per minute; breathing that alternates between rapid and slow (caused by congestion from decreased fluid consumption, a buildup of waste products in the body, and/or a decrease in circulation to the organs).
Turning of the head toward a light source (caused by decreasing vision).
Increased difficulty controlling pain (caused by progression of the disease).
Involuntary movements (called myoclonus).
Increased heart rate.
Hypertension followed by hypotension.
Loss of reflexes in the legs and arms.
Symptoms management
The following are some of the most common potential problems that can arise in the last days and hours of a patient's life:
Pain
Typically controlled with opioids, like morphine, fentanyl, hydromorphone or, in the United Kingdom, diamorphine. High doses of opioids can cause respiratory depression, and this risk increases with concomitant use of alcohol and other sedatives. Careful use of opioids is important to improve the patient's quality of life while avoiding overdoses.
Agitation
Delirium, terminal anguish, restlessness (e.g. thrashing, plucking, or twitching). Typically controlled using clonazepam or midazolam; antipsychotics such as haloperidol or levomepromazine may also be used instead of, or concomitantly with, benzodiazepines. Symptoms may also sometimes be alleviated by rehydration, which may reduce the effects of some toxic drug metabolites.
Respiratory tract secretions
Saliva and other fluids can accumulate in the oropharynx and upper airways when patients become too weak to clear their throats, leading to a characteristic gurgling or rattle-like sound ("death rattle"). While apparently not painful for the patient, the association of this symptom with impending death can create fear and uncertainty for those at the bedside. The secretions may be controlled using drugs such as hyoscine butylbromide, glycopyrronium, or atropine. Rattle may not be controllable if caused by deeper fluid accumulation in the bronchi or the lungs, such as occurs with pneumonia or some tumours.
Nausea and vomiting
Typically controlled using haloperidol, metoclopramide, ondansetron, cyclizine, or other anti-emetics (sometimes levomepromazine is used as a second-line agent to alleviate both agitation and nausea and vomiting).
Dyspnea (breathlessness)
Typically controlled with opioids, like morphine, fentanyl or, in the United Kingdom, diamorphine.
Constipation
Low food intake and opioid use can lead to constipation which can then result in agitation, pain, and delirium. Laxatives and stool softeners are used to prevent constipation. In patients with constipation, the dose of laxatives will be increased to relieve symptoms. Methylnaltrexone is approved to treat constipation due to opioid use.
Other symptoms that may occur, and may be mitigated to some extent, include cough, fatigue, fever, and in some cases bleeding.
Medication administration
A subcutaneous injection is one preferred route of delivery when it has become difficult for patients to swallow or to take pills orally; if repeated doses are needed, a syringe driver (or infusion pump in the US) is likely to be used to deliver a steady low dose of medication. In some settings, such as the home or hospice, sublingual routes of administration may be used for most prescriptions and medications.
Another means of medication delivery, available for use when the oral route is compromised, is a specialized catheter designed to provide comfortable and discreet administration of ongoing medications via the rectal route. The catheter was developed to make rectal access more practical and to provide a way to deliver and retain liquid formulations in the distal rectum, so that health practitioners can leverage the established benefits of rectal administration. Its small flexible silicone shaft allows the device to be placed safely and remain comfortably in the rectum for repeated administration of medications or liquids. The catheter has a small lumen, allowing small flush volumes to carry medication to the rectum. Small volumes of medications (under 15 mL) improve comfort by not stimulating the defecation response of the rectum, and can increase the overall absorption of a given dose by decreasing pooling of medication and migration of medication into more proximal areas of the rectum, where absorption can be less effective.
Integrated pathways
Integrated care pathways are an organizational tool used by healthcare professionals to clearly define the roles of each team-member and coordinate how and when care will be provided. These pathways are utilized to ensure best practices are being utilized for end-of-life care, such as evidence-based and accepted health care protocols, and to list the required features of care for a specific diagnosis or clinical problem. Many institutions have a predetermined pathway for end of life care, and clinicians should be aware of and make use of these plans when possible.
In the United Kingdom, end-of-life care pathways are based on the Liverpool Care Pathway. Originally developed to provide evidence based care to dying cancer patients, this pathway has been adapted and used for a variety of chronic conditions at clinics in the UK and internationally. Despite its increasing popularity, the 2016 Cochrane Review, which only analyzed one trial, showed limited evidence in the form of high-quality randomized clinical trials to measure the effectiveness of end-of-life care pathways on clinical outcomes, physical outcomes, and emotional/psychological outcomes.
The BEACON Project group developed an integrated care pathway entitled the Comfort Care Order Set, which delineates care for the last days of life in either a hospice or acute care inpatient setting. This order set was implemented and evaluated in a multisite system throughout six United States Veterans Affairs Medical Centers, and the study found increased orders for opioid medication post-pathway implementation, as well as more orders for antipsychotic medications, more patients undergoing palliative care consultations, more advance directives, and increased sublingual drug administration. The intervention did not, however, decrease the proportion of deaths that occurred in an ICU setting or the utilization of restraints around death.
Home-based end-of-life care
While not possible for every person needing care, surveys of the general public suggest most people would prefer to die at home. In the period from 2003 to 2017, the number of deaths at home in the United States increased from 23.8% to 30.7%, while the number of deaths in the hospital decreased from 39.7% to 29.8%. Home-based end-of-life care may be delivered in a number of ways, including by an extension of a primary care practice, by a palliative care practice, and by home care agencies such as hospices. High-certainty evidence indicates that implementation of home-based end-of-life care programs increases the number of adults who will die at home and slightly improves their satisfaction at a one-month follow-up. There is low-certainty evidence that there may be very little or no difference in the satisfaction of the person needing care over the longer term (6 months). The number of people who are admitted to hospital during an end-of-life care program is not known. In addition, the impact of home-based end-of-life care on caregivers, healthcare staff, and health service costs is not clear; however, there is weak evidence to suggest that this intervention may reduce health care costs by a small amount.
Disparities in end-of-life care
Not all groups in society have good access to end-of-life care. A systematic review conducted in 2021 investigated the end-of-life care experiences of people with severe mental illness, including those with schizophrenia, bipolar disorder, and major depressive disorder. The review found that individuals with a severe mental illness were unlikely to receive the most appropriate end-of-life care, and it recommended close partnership and communication between mental health and end-of-life care systems, with teams finding ways to support people to die where they choose. More training, support, and supervision should be available for professionals working in end-of-life care; this could also decrease prejudice and stigma against individuals with severe mental illness at the end of life, notably those who are homeless. In addition, studies have shown that minority patients face several additional barriers to receiving quality end-of-life care: they are prevented from accessing care at an equitable rate for a variety of reasons, including individual discrimination from caregivers, cultural insensitivity, racial economic disparities, and medical mistrust.
Non-medical
Family and friends
Family members are often uncertain as to what they should be doing when a person is dying. Many gentle, familiar daily tasks, such as combing hair, putting lotion on delicate skin, and holding hands, are comforting and provide a meaningful method of communicating love to a dying person.
Family members may be suffering emotionally due to the impending death. Their own fear of death may affect their behavior. They may feel guilty about past events in their relationship with the dying person or feel that they have been neglectful. These common emotions can result in tension, fights between family members over decisions, and worsened care, and sometimes, in what medical professionals call the "Daughter from California syndrome", a long-absent family member arrives while a patient is dying and demands inappropriately aggressive care.
Family members may also be coping with unrelated problems, such as physical or mental illness, emotional and relationship issues, or legal difficulties. These problems can limit their ability to be involved, civil, helpful, or present.
Spirituality and religion
Spirituality is thought to be of increased importance to an individual's wellbeing during a terminal illness or toward the end-of-life. Pastoral/spiritual care has a particular significance in end of life care, and is considered an essential part of palliative care by the WHO. In palliative care, responsibility for spiritual care is shared by the whole team, with leadership given by specialist practitioners such as pastoral care workers. The palliative care approach to spiritual care may, however, be transferred to other contexts and to individual practice.
Spiritual, cultural, and religious beliefs may influence or guide patient preferences regarding end-of-life care. Healthcare providers caring for patients at the end of life can engage family members and encourage conversations about spiritual practices to better address the different needs of diverse patient populations. Studies have shown that people who identify as religious also report higher levels of well-being. Religion has also been shown to be inversely correlated with depression and suicide. While religion provides some benefits to patients, there is some evidence of increased anxiety and other negative outcomes in some studies. While spirituality has been associated with less aggressive end-of-life care, religion has been associated with an increased desire for aggressive care in some patients. Despite these varied outcomes, spiritual and religious care remains an important aspect of care for patients. Studies have shown that barriers to providing adequate spiritual and religious care include a lack of cultural understanding, limited time, and a lack of formal training or experience.
Many hospitals, nursing homes, and hospice centers have chaplains who provide spiritual support and grief counseling to patients and families of all religious and cultural backgrounds.
Ageism
The World Health Organization defines ageism as "the stereotypes (how we think), prejudice (how we feel) and discrimination (how we act) towards others or ourselves based on age." A systematic review in 2017 showed that negative attitudes amongst nurses towards older individuals were related to the characteristics of the older adults and their demands. This review also highlighted how nurses who had difficulty giving care to their older patients perceived them as "weak, disabled, inflexible, and lacking cognitive or mental ability". Another systematic review, considering structural and individual-level effects of ageism, found that ageism led to significantly worse health outcomes in 95.5% of the studies and in 74.0% of the 1,159 ageism–health associations examined. Studies have also shown that one's own perception of aging and internalized ageism negatively impact health: the same systematic review examined self-perceptions of aging and concluded that 93.4% of the 142 such associations it studied significantly linked negative self-perceptions of aging to worse health.
Attitudes of healthcare professionals
End-of-life care is an interdisciplinary endeavor involving physicians, nurses, physical therapists, occupational therapists, pharmacists and social workers. Depending on the facility and level of care needed, the composition of the interprofessional team can vary. Health professional attitudes about end-of-life care depend in part on the provider's role in the care team.
Physicians generally have favorable attitudes towards Advance Directives, which are a key facet of end-of-life care. Medical doctors who have more experience and training in end-of-life care are more likely to cite comfort in having end-of-life-care discussions with patients. Those physicians who have more exposure to end-of-life care also have a higher likelihood of involving nurses in their decision-making process.
A systematic review assessing end-of-life conversations between heart failure patients and healthcare professionals evaluated physician attitudes and preferences towards end-of-life care conversations. It found that physicians had difficulty initiating end-of-life conversations with their heart failure patients, owing to apprehension about inducing anxiety in patients, uncertainty in a patient's prognosis, and a tendency to wait for patient cues before initiating end-of-life care conversations.
Although physicians make official decisions about end-of-life care, nurses spend more time with patients and often know more about patient desires and concerns. In a Dutch national survey study of the attitudes of nursing staff about involvement in medical end-of-life decisions, 64% of respondents thought patients preferred talking with nurses rather than with physicians, and 75% desired to be involved in end-of-life decision making.
By country
Canada
In 2012, Statistics Canada's General Social Survey on Caregiving and care receiving found that 13% of Canadians (3.7 million) aged 15 and older reported that at some point in their lives they had provided end-of-life or palliative care to a family member or friend. For those in their 50s and 60s, the percentage was higher, with about 20% reporting having provided palliative care to a family member or friend. Women were also more likely to have provided palliative care over their lifetimes, with 16% of women reporting having done so, compared with 10% of men. These caregivers helped terminally ill family members or friends with personal or medical care, food preparation, managing finances or providing transportation to and from medical appointments.
United Kingdom
End of life care has been identified by the UK Department of Health as an area where quality of care has previously been "very variable," and which has not had a high profile in the NHS and social care. To address this, a national end of life care programme was established in 2004 to identify and propagate best practice, and a national strategy document published in 2008. The Scottish Government has also published a national strategy.
In 2006 just over half a million people died in England, about 99% of them adults over the age of 18, and almost two-thirds adults over the age of 75. About three-quarters of deaths could be considered "predictable" and followed a period of chronic illness – for example heart disease, cancer, stroke, or dementia. In all, 58% of deaths occurred in an NHS hospital, 18% at home, 17% in residential care homes (most commonly people over the age of 85), and about 4% in hospices. However, a majority of people would prefer to die at home or in a hospice, and according to one survey less than 5% would rather die in hospital. A key aim of the strategy is therefore to reduce the need for dying patients to go to hospital or to stay there, and to improve provision of support and palliative care in the community to make this possible. One study estimated that 40% of the patients who had died in hospital had not had medical needs that required them to be there.
In 2015 and 2010, the UK ranked highest globally in a study of end-of-life care. The 2015 study said "Its ranking is due to comprehensive national policies, the extensive integration of palliative care into the National Health Service, a strong hospice movement, and deep community engagement on the issue." The studies were carried out by the Economist Intelligence Unit and commissioned by the Lien Foundation, a Singaporean philanthropic organisation.
The 2015 National Institute for Health and Care Excellence guidelines introduced religion and spirituality among the factors which physicians should take into account when assessing palliative care needs. In 2016, the UK Minister of Health signed a document which declared people "should have access to personalised care which focuses on the preferences, beliefs and spiritual needs of the individual." As of 2017, more than 47% of the 500,000 deaths in the UK occurred in hospitals.
In 2021 the National Palliative and End of Life Care Partnership published their six ambitions for 2021–26. These include fair access to end of life care for everyone regardless of who they are, where they live or their circumstances, and the need to maximise comfort and wellbeing. Informed and timely conversations are also highlighted.
Research funded by the UK's National Institute for Health and Care Research (NIHR) has addressed these areas of need. Examples highlight inequalities faced by several groups and offer recommendations, including the need for close partnership between services caring for people with severe mental illness, improved understanding of the barriers faced by Gypsy, Traveller and Roma communities, and the provision of flexible palliative care services for children from ethnic minorities or deprived areas.
Other research suggests that giving nurses and pharmacists easier access to electronic patient records about prescribing could help people manage their symptoms at home. A named professional to support and guide patients and carers through the healthcare system could also improve the experience of care at home at the end of life. A synthesised review of palliative care in the UK created a resource showing which services were available, grouped according to their intended purpose and benefit to the patient. It also noted that palliative services in the UK are currently available only to patients with an expected timeline to death, usually 12 months or less, and found that these timelines were often inaccurate and created barriers to patients accessing appropriate services. The authors call for a more holistic approach to end-of-life care which is not restricted by arbitrary timelines.
United States
As of 2019, physician-assisted dying is legal in eight states (California, Colorado, Hawaii, Maine, New Jersey, Oregon, Vermont, Washington) and Washington D.C.
Spending on those in the last twelve months of life accounts for 8.5% of total aggregate medical spending in the United States.
When considering only those aged 65 and older, estimates show that about 27% of Medicare's annual $327 billion budget ($88 billion) in 2006 went to care for patients in their final year of life. For the over-65s, between 1992 and 1996, spending on those in their last year of life represented 22% of all medical spending, 18% of all non-Medicare spending, and 25% of all Medicaid spending for the poor. These percentages appear to be falling over time: in 2008, 16.8% of all medical spending on the over-65s went to those in their last year of life.
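As a quick arithmetic check of the 2006 figure above (a minimal sketch; the variable names are illustrative and the values come from the text):

```python
# Arithmetic check of the 2006 Medicare figures quoted above.
# Values are taken from the text; rounding explains any small discrepancy.
medicare_budget_2006 = 327e9   # annual Medicare budget, USD
last_year_share = 0.27         # share spent on patients in their final year of life

print(medicare_budget_2006 * last_year_share / 1e9)  # ≈ 88.3 (billion USD)
```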
Predicting death is difficult, which has affected estimates of spending in the last year of life; when controlling for spending on patients who were predicted as likely to die, Medicare spending was estimated at 5% of the total.
Belgium
Belgium's first palliative home care team was established in 1987, and the first palliative care unit and hospital care support teams were established in 1991. A strong legal and structural framework for palliative care was established in the 1990s, which divided the country into 30 areas in which palliative care networks were responsible for coordinating palliative services. Home care was provided by palliative support teams, and each hospital and care home was required to have a palliative support team. In 1999, Belgium ranked second (after the United Kingdom) in the number of palliative care beds per capita. In 2001, there was an active palliative care support team in 72% of hospitals and a specialized nurse or active support team in 50% of nursing homes. Government resources for palliative care doubled in 2000, and in 2007 Belgium was ranked third out of 52 countries worldwide in terms of resources for palliative care. Belgium has also worked (together with the United Kingdom and Ireland) to raise public awareness of end-of-life care. According to the Lien Foundation report, Belgium ranks 5th (out of 40 countries worldwide) for overall quality of death.
| Biology and health sciences | Medical procedures: General | Health |
11039790 | https://en.wikipedia.org/wiki/Animal | Animal | Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia (). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from to . They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology.
The animal kingdom is divided into five infrakingdoms/superphyla, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the infrakingdom Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large superphyla: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha have an uncertain position within Bilateria.
Animals first appear in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Earlier evidence of animals is still controversial; the sponge-like organism Otavia has been dated back to the Tonian period at the start of the Neoproterozoic, but its identity as an animal is heavily contested. Nearly all modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa.
Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets and as working animals for transportation, and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology
The word animal comes from the Latin noun of the same meaning, which is itself derived from Latin 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek 'after' (in biology, the prefix meta- stands for 'later') and 'animals', plural of 'animal'.
Characteristics
Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular. Unlike plants and algae, which produce their own nutrients, animals are heterotrophic, feeding on organic material and digesting it internally. With very few exceptions, animals respire aerobically. All animals are motile (able to spontaneously move their bodies) during at least part of their life cycle, but some animals, such as sponges, corals, mussels, and barnacles, later become sessile. The blastula is a stage in embryonic development that is unique to animals, allowing cells to be differentiated into specialised tissues and organs.
Structure
All animals are composed of cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised, making the formation of complex structures possible. This may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Animal cells uniquely possess the cell junctions called tight junctions, gap junctions, and desmosomes.
With few exceptions—in particular, the sponges and placozoans—animal bodies are differentiated into tissues. These include muscles, which enable locomotion, and nerve tissues, which transmit signals and coordinate the body. Typically, there is also an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians).
Reproduction and development
Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs.
Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding.
Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids.
Ecology
Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, who often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges.
Most animals rely on biomass and bioenergy produced by plants and phytoplanktons (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria.
Animals evolved in the sea. Lineages of arthropods colonised land around the same time as land plants, probably between 510 and 471 million years ago during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Animals occupy virtually all of earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are however not particularly heat tolerant; very few of them can survive at constant temperatures above or in the most extreme cold deserts of continental Antarctica.
Diversity
Size
The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 metres. Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown.
Numbers and habitats of major phyla
The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.
Evolutionary origin
Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record.
The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments.
Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do.
Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges.
Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures.
Phylogeny
External phylogeny
Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa.
Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla.
Internal phylogeny
The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex.
The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with the following cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges):
Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct the following ctenophore-sister phylogeny:
Non-bilaterians
Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food.
The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm.
The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
Bilateria
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria is shown below.
Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant lineages have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures.
Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians.
Protostomes and deuterostomes
Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage.
Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm.
The main deuterostome phyla are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals.
The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification
In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about.
In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who described the Vermes as a chaotic mess and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians.
In his 1817 , Georges Cuvier used comparative anatomy to group the animals into four ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860.
In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia.
In human culture
Practical uses
The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined.
Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture.
Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin.
People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts.
A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own.
A wide variety of terrestrial and aquatic animals are hunted for sport.
Symbolic uses
The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul.
Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies.
Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
| Biology and health sciences | Science and medicine | null |
3103149 | https://en.wikipedia.org/wiki/Eucestoda | Eucestoda | Eucestoda, commonly referred to as tapeworms, is the larger of the two subclasses of flatworms in the class Cestoda (the other subclass being Cestodaria). Larvae have six posterior hooks on the scolex (head), in contrast to the ten-hooked Cestodaria. All tapeworms are endoparasites of vertebrates, living in the digestive tract or related ducts. Examples are the pork tapeworm (Taenia solium) with a human definitive host, and pigs as the secondary host, and Moniezia expansa, the definitive hosts of which are ruminants.
Body structure
Adult Eucestoda have a white-opaque dorso-ventrally flattened appearance, and are elongated, ranging in length from a few millimeters (about ¼") to 25 meters (80'). Almost all members, except members of the orders Caryophyllidea and Spathebothriidea, are polyzoic with repeated sets of reproductive organs down the body length, and almost all members, except members of the order Dioecocestidae, are protandrous hermaphrodites. Most except caryophyllideans consist of a few to 4000 proglottids (segments) that show a characteristic body differentiation pattern into scolex (head), neck, and strobila.
The scolex, located at the anterior end, is a small (usually less than 1 mm) holdfast organ with specific systems for fastening itself to materials: rostrum, acetabula, suckers, bothria, grooves, and hooks. The small neck region, directly behind the scolex, consists of an undifferentiated tissue region of proglottid proliferation, leading into a zone of increasing and continuous proglottid differentiation. As such, the main and largest section of the body, the strobila, consists of a chain of increasingly mature proglottids. These cytological processes are not well understood at present.
Members of the Eucestoda have no mouth or digestive tract, and instead absorb nutrients through a layer of microtriches over the tegument at the shared body wall surface. In addition to the body wall, several other systems are common to the whole length of the tapeworm, including excretory canals, nerve fibers, and longitudinal muscles. The excretory system is responsible for osmoregulation and consists of blind-ending flame bulbs communicating through a duct system. The nervous system, often referred to as a "ladder system," is a system of longitudinal connectives and transverse ring commissures.
Reproduction
The reproductive systems develop progressively along the differentiated proglottids of the strobila region, with each proglottid developing one or two sets of sexual organs that differentiate at different times in a species-specific pattern, usually male-first. Thus, moving in the posterior direction of the continuously maturing proglottid chain, there are proglottids with mature male reproductive organs, then proglottids with mature female reproductive organs, and then proglottids with fertilized eggs in the uterus, a condition commonly referred to as "gravid."
Proglottids
An atrium on the lateral margin of each proglottid contains the openings to both the male and female genital ducts. Follicular testes produce sperm, which are carried by a system of ducts to the cirrus, an eversible copulatory organ that usually has a hypodermic system of spines and a holdfast system of hooks. The main specialized female reproductive organs are an ovary that produces eggs and a vitellarium that produces yolk cells. Yolk cells travel in a duct system to the oviduct, where, in a modified region, the ovum is enclosed in a shell with yolk cells. After the gonads and their ducts have finished maturing, the female reproductive organs begin to mature. The oviduct develops a vagina and enlarges into the uterus, where fertilization and embryonic development occur.
Egg formation is a result of copulation. A proglottid can copulate with itself, with other proglottids in the same worm, or with proglottids in other worms, and hypodermic fertilization sometimes occurs. When a gravid proglottid that is distended with an embryo reaches the end of the strobila, it detaches and passes out of the host intact with feces, with or without some tissue degeneration. In the order Pseudophyllidea, the uterus has a pore and the proglottid sheds the shelled embryo, only becoming detached when exhausted.
Some members of the Eucestoda (such as Echinococcus, Sparganum, Taenia multiceps sp., and Mesocestoides sp.) can reproduce asexually through budding, which initiates a metagenesis of alternating sexually and asexually reproducing generations.
Life stages
A tapeworm can live from a few days to over 20 years. Eucestoda ontogenesis continues through metamorphosing in different larval stages inside different hosts. The initial six-hooked embryo, known as an oncosphere or hexacanth, forms through cleavage. In the order Pseudophyllidea, it remains enclosed in a ciliated embryophore. The embryo continues to develop in other host species, with two intermediate hosts generally needed. It gains entry to its first intermediate host by being eaten.
Except for members of the order Taeniidae, the first intermediate host is an arthropod, and except in the case of Archigetes spp. (which can attain sexual maturity in freshwater oligochaetes), the second host is usually a fish, but can be another invertebrate or vertebrate. After the scolex has differentiated and matured in the larval stage, growth will stop until a vertebrate eats the intermediate host, and then the strobila develops. Adult tapeworms often have a high final host specificity, with some species only found in one host vertebrate.
Common infective species
Medical importance
Taeniasis
Taeniasis is an infection within the intestines by adult tapeworms belonging to the genus Taenia. It is due to eating contaminated undercooked beef or pork. There are generally no or only mild symptoms. Symptoms may occasionally include weight loss or abdominal pain. Segments of tapeworm may be seen in the stool.
Cysticercosis
Cysticercosis is a tissue infection caused by the young form of the pork tapeworm. Infection occurs through swallowing or antiperistaltic contractions during regurgitation carrying eggs or gravid proglottids to the stomach. At this point, larvae hatch when exposed to enzymes and penetrate the intestinal wall, travelling through the body through blood vessels to tissues like the brain, the eye, muscles, and the nervous system (called neurocysticercosis).
At these sites, the parasites lodge and form cysts, a condition called cysticercosis, producing inflammatory reactions and clinical issues when they die, sometimes causing serious or fatal damage. In the eye, the parasites can cause visual loss, and infection of the spine and adjacent leptomeninges can cause paresthesias, pain, or paralysis.
Echinococcosis (hydatid disease)
Humans become accidental hosts to worms of the genus Echinococcus, playing no role in the worm's biological cycle. This can result in echinococcosis, also called hydatid disease. Humans (usually children) become infected by direct contact with dogs and eating food contaminated with dog feces. Common sites of infection are the liver, the lungs, muscles, bones, kidneys, and the spleen.
Eggs hatch in the gastrointestinal tract after the consumption of contaminated food, after which the larvae travel to the liver through portal circulation. Here, the larvae are trapped and usually develop into hydatid cysts. While the liver is the first filter for trapping them, the lungs act as the second filter site, trapping most of the larvae that are not trapped by the liver. Some larvae escape from the lungs to cause cysts in other tissues.
When a larva becomes established in tissue, it develops into a "bladderworm" or "hydatid" and can cause various cancer-like cysts that may rupture and interact with nearby organs. Most cases are asymptomatic, and the mortality rate is low, but various complications from these interactions may lead to debilitating illness.
Hymenolepiasis
Arthropods are intermediate hosts of Hymenolepis nana, otherwise known as the "dwarf tapeworm," while humans are used as final hosts. Humans become infected and develop hymenolepiasis through eating infected arthropods, ingesting eggs in water inhabited by arthropods, or from dirty hands. This is a common and widespread intestinal worm.
While light infections are usually asymptomatic, autoinfection through eating the eggs of worms in the intestines is possible, and it can lead to hyperinfection. Humans can also become hyperinfected through ingesting grain products contaminated by infected insects. Infections involving more than two thousand worms can cause many different gastrointestinal symptoms and allergic responses. Common symptoms include chronic urticaria, skin eruption, and phlyctenular keratoconjunctivitis.
Diphyllobothriasis
Diphyllobothriasis is caused by infection with Diphyllobothrium latum (also known as the "broad tapeworm" or "fish tapeworm") and related species. Humans become infected by eating raw, undercooked, or marinated fish acting as a second intermediate or paratenic host harboring metacestodes or plerocercoid larvae.
Clinical symptoms are due to the large size of the tapeworm, which often reaches a length exceeding . The most common symptom is pernicious anemia, caused by the absorption of vitamin B12 by the worm. Other symptoms include various intestinal issues, slight leukocytosis, and eosinophilia.
Sparganosis
Sparganosis is caused by the plerocercoid larvae of the tapeworm Spirometra. Humans become infected by drinking contaminated water, eating raw or poorly cooked infected flesh, or from using poultices of raw infected flesh (usually raw pork or snake) on skin or mucous membranes.
The most common symptom is a painful, slowly growing nodule in the subcutaneous tissues, which may migrate. Infection in the eye area can cause pain, irritation, edema, and excess watering. When the orbital tissues become infected, the swelling can cause blindness. An infected bowel may become perforated. Brain infection can cause granulomas, hematomas, and abscesses.
Subdivisions
The evolutionary history of the Eucestoda has been studied using ribosomal RNA, mitochondrial and other DNA, and morphological analysis and continues to be revised. "Tetraphyllidea" is seen to be paraphyletic; "Pseudophyllidea" has been broken up into two orders, Bothriocephalidea and Diphyllobothriidea. Hosts, whose phylogeny often mirrors that of the parasites (Fahrenholz's rule), are indicated in italics and parentheses, the life-cycle sequence (where known) shown by arrows as (intermediate host1 [→ intermediate host2 ] → definitive host). Alternatives, generally for different species within an order, are shown in square brackets.
| Biology and health sciences | Platyzoa | Animals |
3103702 | https://en.wikipedia.org/wiki/Taeniodonta | Taeniodonta | Taeniodonta ("banded teeth") is an extinct order of eutherian mammals, that lived in North America and Europe from the late Cretaceous (Maastrichtian) to the middle Eocene.
Taeniodonts evolved quickly into highly specialized digging animals, and varied greatly in size, from rat-sized species to some as large as a bear. Later species developed prominent front teeth and huge claws for digging and rooting. Some genera, like Stylinodon, had ever-growing teeth. The scarcity of taeniodont fossils can be explained by the fact that these animals probably lived in dry or arid climates unconducive to fossilization.
According to 2022 studies by O. C. Bertrand and Sarah L. Shelley, taeniodonts are identified as basal placental mammals. The genera Alveugena, Ambilestes and Procerberus are the immediate outgroups to Taeniodonta, with the genus Alveugena classified as the sister taxon to this order.
Taxonomy and phylogeny
Taxonomy
From Thomas E. Williamson and Stephen L. Brusatte (2013):
Order: †Taeniodonta
Genus: †Schowalteria
†Schowalteria clemensi
Family: †Conoryctidae
Subfamily: †Conoryctinae
Tribe: †Conoryctellini
Genus: †Conoryctella
†Conoryctella dragonensis
†Conoryctella pattersoni
Tribe: †Conoryctini
Genus: †Conoryctes
†Conoryctes comma
Genus: †Huerfanodon (paraphyletic genus)
†Huerfanodon heilprinianus
†Huerfanodon polecatensis
†Huerfanodon torrejonius
†Huerfanodon sp. [USNM 9678]
Subfamily: †Eurodontinae
Genus: †Eurodon
†Eurodon silveirinhensis
Family: †Onychodectidae
Genus: †Onychodectes
†Onychodectes tisonensis
†Onychodectes tisonensis rarus
†Onychodectes tisonensis tisonensis
Superfamily: †Stylinodontoidea
Family: †Stylinodontidae
Subfamily: †Stylinodontinae
Tribe: †Ectoganini
Genus: †Ectoganus
†Ectoganus bighornensis
†Ectoganus copei
†Ectoganus gliriformis
†Ectoganus lobdelli
Tribe: †Psittacotheriini
Genus: †Psittacotherium
†Psittacotherium multifragum
Tribe: †Stylinodontini
Genus: †Stylinodon
†Stylinodon mirus
Subfamily: †Wortmaniinae
Genus: †Wortmania
†Wortmania otariidens
†Wortmania sp. [Garfield County, Montana]
Phylogeny
| Biology and health sciences | Mammals: General | Animals |
1602490 | https://en.wikipedia.org/wiki/Extensive-form%20game | Extensive-form game | In game theory, an extensive-form game is a specification of a game allowing (as the name suggests) for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.
Finite extensive-form games
Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from , an n-player extensive-form game thus consists of the following:
A finite set of n (rational) players
A rooted tree, called the game tree
Each terminal (leaf) node of the game tree has an n-tuple of payoffs, meaning there is one payoff for each player at the end of every possible play
A partition of the non-terminal nodes of the game tree into n+1 subsets, one for each (rational) player, and with a special subset for a fictitious player called Chance (or Nature). Each player's subset of nodes is referred to as the "nodes of the player". (A game of complete information thus has an empty set of Chance nodes.)
Each node of the Chance player has a probability distribution over its outgoing edges.
Each set of nodes of a rational player is further partitioned into information sets, which make certain choices indistinguishable for the player when making a move, in the sense that:
there is a one-to-one correspondence between the outgoing edges of any two nodes of the same information set; thus the set of all outgoing edges of an information set is partitioned into equivalence classes, each class representing a possible choice for a player's move at some point, and
every (directed) path in the tree from the root to a terminal node can cross each information set at most once
the complete description of the game specified by the above parameters is common knowledge among the players
A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. At any rational player's node, the player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge, except that (in general) the player doesn't know which one is being followed. (An outside observer knowing every other player's choices up to that point, and the realization of Nature's moves, can determine the edge precisely.) A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set of theirs. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes. It is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome; this assumption entails that every rational player will evaluate an a priori random outcome by its expected utility.
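In symbols (a standard formulation, not taken verbatim from this article): if a strategy profile $\sigma$, together with Nature's probabilities, induces a probability $\Pr_\sigma(z)$ of reaching each terminal node $z \in T$, then a rational player $i$ evaluates $\sigma$ by its expected utility
$$U_i(\sigma) = \sum_{z \in T} \Pr_\sigma(z)\, u_i(z),$$
where $u_i(z)$ is player $i$'s payoff at $z$.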
The above presentation, while precisely defining the mathematical structure over which the game is played, elides however the more technical discussion of formalizing statements about how the game is played like "a player cannot distinguish between nodes in the same information set when making a decision". These can be made precise using epistemic modal logic; see for details.
A perfect information two-player game over a game tree (as defined in combinatorial game theory and artificial intelligence) can be represented as an extensive-form game with outcomes (i.e. win, lose, or draw). Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectiminimax tree, like that of backgammon, has no imperfect information (all information sets are singletons) but has moves of chance. Poker, by contrast, has both moves of chance (the cards being dealt) and imperfect information (the cards secretly held by other players).
Perfect and complete information
A complete extensive-form representation specifies:
the players of a game
for every player every opportunity they have to move
what each player can do at each of their moves
what each player knows for every move
the payoffs received by every player for every possible combination of moves
The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players (e.g. 2,1 represents a payoff of 2 to player 1 and a payoff of 1 to player 2). The labels by every edge of the graph are the name of the action that edge represents.
The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D'. The payoffs are as specified in the tree. There are four outcomes represented by the four terminal nodes of the tree: (U,U'), (U,D'), (D,U') and (D,D'). The payoffs associated with each outcome, respectively, are (0,0), (2,1), (1,2) and (3,1).
If player 1 plays D, player 2 will play U' to maximise their payoff and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises their payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so will play U, and player 2 will play D'. This is the subgame perfect equilibrium.
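This backward-induction argument can be checked mechanically. Below is a minimal Python sketch (an illustration, not part of the article); the tree encoding and function name are assumptions, with the payoffs taken from the four outcomes listed above.

# Minimal backward-induction sketch for the two-stage game above.
# Terminal payoffs: (U,U')=(0,0), (U,D')=(2,1), (D,U')=(1,2), (D,D')=(3,1).
# A node is either a terminal payoff tuple or a pair (player, {action: child}).
tree = (1, {
    "U": (2, {"U'": (0, 0), "D'": (2, 1)}),
    "D": (2, {"U'": (1, 2), "D'": (3, 1)}),
})

def backward_induction(node):
    """Return (payoffs, choices): the equilibrium payoffs and each mover's action on the path."""
    if not isinstance(node[1], dict):
        return node, {}  # terminal node: payoffs, no further choices
    player, children = node
    best_action, best_payoffs, best_choices = None, None, None
    for action, child in children.items():
        payoffs, choices = backward_induction(child)
        # The mover keeps the branch maximising their own component of the payoff tuple.
        if best_payoffs is None or payoffs[player - 1] > best_payoffs[player - 1]:
            best_action, best_payoffs, best_choices = action, payoffs, choices
    best_choices = dict(best_choices)
    best_choices[player] = best_action
    return best_payoffs, best_choices

print(backward_induction(tree))  # ((2, 1), {2: "D'", 1: 'U'}) -- the (U, D') outcome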
Imperfect information
An advantage of representing the game in this way is that it is clear what the order of play is. The tree shows clearly that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this. One player does not always observe the choice of another (for example, moves may be simultaneous or a move may be hidden). An information set is a set of decision nodes such that:
Every node in the set belongs to one player.
When the game reaches the information set, the player who is about to move cannot differentiate between nodes within the information set; i.e. if the information set contains more than one node, the player to whom that set belongs does not know which node in the set has been reached.
In extensive form, an information set is indicated by a dotted line connecting all nodes in that set or sometimes by a loop drawn around all the nodes in that set.
If a game has an information set with more than one member that game is said to have imperfect information. A game with perfect information is such that at any stage of the game, every player knows exactly what has taken place earlier in the game; i.e. every information set is a singleton set. Any game without perfect information has imperfect information.
The game on the right is the same as the above game except that player 2 does not know what player 1 does when they come to play. The first game described has perfect information; the game on the right does not. If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum), play in the first game will be as follows: player 1 knows that if they play U, player 2 will play D' (because for player 2 a payoff of 1 is preferable to a payoff of 0) and so player 1 will receive 2. However, if player 1 plays D, player 2 will play U' (because to player 2 a payoff of 2 is better than a payoff of 1) and player 1 will receive 1. Hence, in the first game, the equilibrium will be (U, D') because player 1 prefers receiving 2 over 1 and so will play U, and so player 2 will play D'.
In the second game it is less clear: player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking they have played U when they have actually played D so that player 2 will play D' and player 1 will receive 3. In fact in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U' and player 2 holds the belief that player 1 will definitely play D. In this equilibrium, every strategy is rational given the beliefs held and every belief is consistent with the strategies played. Notice how the imperfection of information changes the outcome of the game.
To more easily solve this game for the Nash equilibrium, it can be converted to normal form. Because player 2 cannot observe player 1's move before choosing, the game is strategically equivalent to a simultaneous game; player one and player two each have two strategies.
Player 1's strategies: {U, D}
Player 2's strategies: {U', D'}
This gives a two-by-two matrix with a unique payoff pair for each combination of moves. Using the normal-form game, it is now possible to solve the game by identifying each player's best responses.
If player 1 plays Up (U), player 2 prefers to play Down (D') (payoff 1 > 0)
If player 1 plays Down (D), player 2 prefers to play Up (U') (payoff 2 > 1)
If player 2 plays Up (U'), player 1 prefers to play Down (D) (payoff 1 > 0)
If player 2 plays Down (D'), player 1 prefers to play Down (D) (payoff 3 > 2)
These preferences can be marked within the matrix, and any cell in which both players' choices are best responses is a Nash equilibrium. This particular game has a single solution, (D,U'), with a payoff of (1,2).
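The best-response marking just described can likewise be sketched in Python (an illustration, not from the article; the dictionary encoding is an assumption):

# Payoff matrix for the imperfect-information game, as (player 1, player 2) payoffs.
payoffs = {("U", "U'"): (0, 0), ("U", "D'"): (2, 1),
           ("D", "U'"): (1, 2), ("D", "D'"): (3, 1)}
rows, cols = ["U", "D"], ["U'", "D'"]

# Player 1's best response to each column; player 2's best response to each row.
br1 = {c: max(rows, key=lambda r: payoffs[(r, c)][0]) for c in cols}
br2 = {r: max(cols, key=lambda c: payoffs[(r, c)][1]) for r in rows}

# A cell is a pure-strategy Nash equilibrium when both best responses meet there.
nash = [(r, c) for r in rows for c in cols if br1[c] == r and br2[r] == c]
print(nash, [payoffs[cell] for cell in nash])  # [('D', "U'")] [(1, 2)]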
In games with infinite action spaces and imperfect information, non-singleton information sets are represented, if necessary, by inserting a dotted line connecting the (non-nodal) endpoints behind the arc described below (see Infinite action space) or by dashing the arc itself. In the Stackelberg competition described there, if the second player had not observed the first player's move the game would no longer fit the Stackelberg model; it would be Cournot competition.
Incomplete information
It may be the case that a player does not know exactly what the payoffs of the game are or of what type their opponents are. This sort of game has incomplete information. In extensive form it is represented as a game with complete but imperfect information using the so-called Harsanyi transformation. This transformation introduces to the game the notion of nature's choice or God's choice. Consider a game consisting of an employer considering whether to hire a job applicant. The job applicant's ability might be one of two things: high or low. Their ability level is random; they either have low ability with probability 1/3 or high ability with probability 2/3. In this case, it is convenient to model nature as another player of sorts who chooses the applicant's ability according to those probabilities. Nature however does not have any payoffs. Nature's choice is represented in the game tree by a non-filled node. Edges coming from a nature's choice node are labelled with the probability of the event it represents occurring.
The game on the left is one of complete information (all the players and payoffs are known to everyone) but of imperfect information (the employer doesn't know what nature's move was.) The initial node is in the centre and it is not filled, so nature moves first. Nature selects with the same probability the type of player 1 (which in this game is tantamount to selecting the payoffs in the subgame played), either t1 or t2. Player 1 has distinct information sets for these; i.e. player 1 knows what type they are (this need not be the case). However, player 2 does not observe nature's choice. They do not know the type of player 1; however, in this game they do observe player 1's actions; i.e. there is perfect information. Indeed, it is now appropriate to alter the above definition of complete information: at every stage in the game, every player knows what has been played by the other players. In the case of private information, every player knows what has been played by nature. Information sets are represented as before by broken lines.
In this game, if nature selects t1 as player 1's type, the game played will be like the very first game described, except that player 2 does not know it (and the very fact that this cuts through their information sets disqualifies it from subgame status). There is one separating perfect Bayesian equilibrium; i.e. an equilibrium in which different types do different things.
If both types play the same action (pooling), an equilibrium cannot be sustained. If both play D, player 2 can only form the belief that they are on either node in the information set with probability 1/2 (because this is the chance of seeing either type). Player 2 maximises their payoff by playing D'. However, if they play D', type 2 would prefer to play U. This cannot be an equilibrium. If both types play U, player 2 again forms the belief that they are at either node with probability 1/2. In this case player 2 plays D', but then type 1 prefers to play D.
If type 1 plays U and type 2 plays D, player 2 will play D' whatever action they observe, but then type 1 prefers D. The only equilibrium hence is with type 1 playing D, type 2 playing U and player 2 playing U' if they observe D and randomising if they observe U. Through their actions, player 1 has signalled their type to player 2.
Formal definition
Formally, a finite game in extensive form is a structure $\Gamma = \langle K, \mathcal{H}, A, \alpha, (I, \iota), \rho, u \rangle$
where:
$K = \langle V, v_0, T, p \rangle$ is a finite tree with a set of nodes $V$, a unique initial node $v_0 \in V$, a set of terminal nodes $T \subset V$ (let $D = V \setminus T$ be the set of decision nodes) and an immediate predecessor function $p : V \setminus \{v_0\} \to D$ on which the rules of the game are represented,
$\mathcal{H}$ is a partition of $D$ called an information partition,
$A(H)$ is the set of actions available for each information set $H \in \mathcal{H}$; these action sets form a partition of the set of all actions $A$.
$\alpha : V \setminus \{v_0\} \to A$ is an action partition associating each non-initial node $v$ with the single action $\alpha(v)$ leading to it from its predecessor, fulfilling:
for every decision node $v \in D$ with information set $H \ni v$, the restriction $\alpha|_{s(v)} : s(v) \to A(H)$ of $\alpha$ to the set of successor nodes $s(v)$ is a bijection.
$I = \{0, 1, \ldots, n\}$ is a finite set of players, $0$ is (a special player called) nature, and $\iota : \mathcal{H} \to I$ is a player partition of information sets; by extension, $\iota(v)$ is the single player that makes a move at node $v$.
$\rho$ is a family of probabilities of the actions of nature, and
$u : T \to \mathbb{R}^n$ is a payoff profile function.
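As a concrete illustration (not part of the formal definition; all field names are invented), the components above map naturally onto a container type such as the following Python sketch:

from dataclasses import dataclass

@dataclass
class ExtensiveFormGame:
    """Mirrors the structure <K, H, A, alpha, (I, iota), rho, u> defined above."""
    nodes: set                   # V: the nodes of the game tree K
    root: str                    # v0: the unique initial node
    terminals: set               # T: terminal nodes; D = nodes - terminals are decision nodes
    predecessor: dict            # p: maps each non-root node to its immediate predecessor
    information_partition: list  # H: information sets, each a frozenset of decision nodes
    actions: dict                # A(H): actions available at each information set (keyed by frozenset)
    action_leading_to: dict      # alpha: maps each non-root node to the action reaching it
    player_of: dict              # iota: maps each information set to a player (0 = nature)
    nature_probabilities: dict   # rho: distributions over actions at nature's information sets
    payoffs: dict                # u: maps each terminal node to an n-tuple of payoffs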
Infinite action space
It may be that a player has an infinite number of possible actions to choose from at a particular decision node. The device used to represent this is an arc joining two edges protruding from the decision node in question. If the action space is a continuum between two numbers, the lower and upper delimiting numbers are placed at the bottom and top of the arc respectively, usually with a variable that is used to express the payoffs. The infinite number of decision nodes that could result are represented by a single node placed in the centre of the arc. A similar device is used to represent action spaces that, whilst not infinite, are large enough to prove impractical to represent with an edge for each action.
The tree on the left represents such a game, either with infinite action spaces (any real number between 0 and 5000) or with very large action spaces (perhaps any integer between 0 and 5000). This would be specified elsewhere. Here, it will be supposed that it is the former and, for concreteness, it will be supposed it represents two firms engaged in Stackelberg competition. The payoffs to the firms are represented on the left, with $q_1$ and $q_2$ as the strategies they adopt and $c_1$ and $c_2$ as some constants (here marginal costs to each firm). The subgame perfect Nash equilibria of this game can be found by taking the first partial derivative of each payoff function with respect to the follower's (firm 2) strategy variable ($q_2$) and finding its best response function, $q_2(q_1) = \frac{5000 - q_1 - c_2}{2}$. The same process can be done for the leader except that in calculating its profit, it knows that firm 2 will play the above response and so this can be substituted into its maximisation problem. It can then solve for $q_1$ by taking the first derivative, yielding $q_1^* = \frac{5000 + c_2 - 2c_1}{2}$. Feeding this into firm 2's best response function, $q_2^* = \frac{5000 + 2c_1 - 3c_2}{4}$ and $(q_1^*, q_2^*)$ is the subgame perfect Nash equilibrium.
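Spelled out (assuming, consistently with the 0-to-5000 action space above, linear inverse demand $P = 5000 - q_1 - q_2$ and profits $\pi_i = q_i(5000 - q_1 - q_2 - c_i)$; this functional form is an inference, since the payoffs themselves appear only in the original figure):
\begin{align*}
\frac{\partial \pi_2}{\partial q_2} &= 5000 - q_1 - 2q_2 - c_2 = 0 \quad\Rightarrow\quad q_2(q_1) = \frac{5000 - q_1 - c_2}{2}, \\
\pi_1 &= q_1\left(5000 - q_1 - q_2(q_1) - c_1\right) = \frac{q_1\left(5000 - q_1 + c_2 - 2c_1\right)}{2}, \\
\frac{\partial \pi_1}{\partial q_1} &= \frac{5000 - 2q_1 + c_2 - 2c_1}{2} = 0 \quad\Rightarrow\quad q_1^{*} = \frac{5000 + c_2 - 2c_1}{2}, \qquad q_2^{*} = \frac{5000 + 2c_1 - 3c_2}{4}.
\end{align*}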
| Mathematics | Game theory | null |
1602970 | https://en.wikipedia.org/wiki/Information%20set%20%28game%20theory%29 | Information set (game theory) | The information set is the basis for decision making in a game, which includes the actions available to both sides and the benefits of each action. The information set is an important concept in games of imperfect information. In game theory, an information set represents all possible points (or decision nodes) in a game that a given player might be at during their turn, based on their current knowledge and observations. These nodes are indistinguishable to the player due to incomplete information about previous actions or the state of the game. An information set therefore groups together all decision points where the player, given what they know, cannot tell which specific point they are currently at. For a better idea of decision vertices, refer to Figure 1. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, some players cannot be sure what the game state is; for instance, they may not know what exactly happened in the past or what should be done right now.
Information sets are used in extensive-form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be taken depending on each player's next move. In games of imperfect information there is hidden information: each player does not have complete knowledge of the opponent's information, such as cards that have not been revealed in a poker game. When constructing a game tree, it can be challenging for a player to determine their exact location within the tree solely from their knowledge and observations, because players may lack complete information about the actions or strategies of their opponents. As a result, a player may only be certain that they are at one of several possible nodes. The collection of these indistinguishable nodes at a given point is called the 'information set'. Information sets can be depicted in game trees, typically using dotted lines, circles, or simply labelled vertices that show a particular player's options at the current stage of the game, as shown in Figure 1.
More specifically, in the extensive form, an information set is a set of decision nodes such that:
Every node in the set belongs to one player.
When the game reaches the information set, the player with the move cannot differentiate between nodes within the information set; i.e. if an information set contains multiple nodes, the player associated with that information set does not know which node has been reached when making their move.
Games in extensive form often involve each player being able to play multiple moves, which results in the formation of multiple information sets as well. A player makes choices at each of these vertices based on the options in the information set. This is known as the player's strategy, and it determines the player's path from the start of the game to the end, which is also known as the play of the game. Given the strategy of each player, the outcome of a play is fully determined unless chance moves are involved, in which case there will not always be a single outcome. When chance moves are involved, a vector of strategies instead yields a probability distribution over the possible outcomes of the game. Multiple outcomes can arise because the chance moves are likely to differ each time; however, depending on the strength of the strategies, some outcomes may have higher probabilities than others.
Assuming that there are multiple information sets in a game, the game transforms from a static game into a dynamic game. The key to solving a dynamic game is to work out each player's information set and make decisions based on their choices at different stages. For example, when player A chooses first, player B will make the best decision for themselves based on A's choice. Player A, in turn, can predict B's reaction and make a choice in their own favour. The notion of information set was introduced by John von Neumann, motivated by studying the game of poker.
Example
At the right are two versions of the battle of the sexes game, shown in extensive form. Below, the normal form for both of these games is shown as well.
The first game is simply sequential: when player 2 makes a choice, both parties are already aware of whether player 1 has chosen O (opera) or F (football).
The second game is also sequential, but the dotted line shows player 2's information set. This is the common way to show that when player 2 moves, he or she is not aware of what player 1 did.
This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand. They know that they can choose O safely because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for O and get 2 than choose F and get 0. Formally, that's applying subgame perfection to solve the game.
In the second game, player 2 can't observe what player 1 did, so it might as well be a simultaneous game. So subgame perfection doesn't get us anything that Nash equilibrium can't get us, and we have the standard 3 possible equilibria:
Both choose opera
both choose football
or both use a mixed strategy, with player 1 choosing O(pera) 3/5 of the time, and player 2 choosing O(pera) 2/5 of the time
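These probabilities follow from the usual indifference conditions. Assuming payoffs consistent with the discussion above, $(O,O) = (3,2)$, $(F,F) = (2,3)$, and $(0,0)$ for mismatches (an inference, since the matrix itself appears only in the original figures), let $p$ be player 1's probability of choosing $O$ and $q$ player 2's. Player 2 is indifferent when $2p = 3(1-p)$, giving $p = \tfrac{3}{5}$; player 1 is indifferent when $3q = 2(1-q)$, giving $q = \tfrac{2}{5}$.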
| Mathematics | Game theory | null |
1604590 | https://en.wikipedia.org/wiki/Clomipramine | Clomipramine | Clomipramine, sold under the brand name Anafranil among others, is a tricyclic antidepressant (TCA). It is used in the treatment of various conditions, most notably obsessive–compulsive disorder but also many other disorders, including hyperacusis, panic disorder, major depressive disorder, trichotillomania, body dysmorphic disorder and chronic pain. It has also been notably used to treat premature ejaculation and the cataplexy associated with narcolepsy.
It may also address certain fundamental features surrounding narcolepsy besides cataplexy (especially hypnagogic and hypnopompic hallucinations). The evidence behind this, however, is less robust. As with other antidepressants (notably including selective serotonin reuptake inhibitors), it may paradoxically increase the risk of suicide in those under the age of 25, at least in the first few weeks of treatment.
It is typically taken by mouth, although intravenous preparations are sometimes used.
Common side effects include dry mouth, constipation, loss of appetite, sleepiness, weight gain, sexual dysfunction, and trouble urinating. Serious side effects include an increased risk of suicidal behavior in those under the age of 25, seizures, mania, and liver problems. If stopped suddenly, a withdrawal syndrome may occur with headaches, sweating, and dizziness. It is unclear if it is safe for use in pregnancy. Its mechanism of action is not entirely clear but is believed to involve increased levels of serotonin and norepinephrine.
Clomipramine was discovered in 1964 by the Swiss drug manufacturer Ciba-Geigy. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Clomipramine has a number of uses in medicine, including in the treatment of:
Obsessive–compulsive disorder (OCD), which happens to be its only US FDA-labeled indication. Other regulatory agencies (such as the TGA of Australia and the MHRA of the UK) have also approved clomipramine for this indication.
Major depressive disorder (MDD), a popular off-label use in the US. It is approved by the Australian TGA and the United Kingdom MHRA for this indication. In Japan it is also approved for depression. Some have suggested that clomipramine may have superior efficacy compared to other antidepressants in the treatment of MDD, especially the more severe forms, although at the current time the evidence may be insufficient to fully substantiate this claim.
As with other tricyclic antidepressants, the antidepressant response specifically may be augmented, including in treatment-resistant cases, with triiodothyronine at a dosage of 50 mcg. In the case of clomipramine, this may also hold for obsessive-compulsive disorder.
Panic disorder with or without agoraphobia.
Body dysmorphic disorder
Repetitive self-injurious/self-harming behaviours in those with intellectual disability specifically.
The subtype of systemised paranoia characterised by somatic phenomena.
Compulsive nail-biting (onychophagia).
Cataplexy associated with narcolepsy. This is a TGA and MHRA-labeled indication for clomipramine.
Self-bloodletting
Premature ejaculation, where it may be more effective than paroxetine
Depersonalization-derealization disorder
Chronic pain with or without organic disease, particularly headache of the tension type.
Developmental stuttering
Sleep paralysis, with or without narcolepsy
Enuresis (involuntary urinating in sleep) in children. The effect may not be sustained following treatment, and alarm therapy may be more effective in both the short-term and the long-term. Combining a tricyclic (such as clomipramine) with anticholinergic medication may be more effective for treating enuresis than the tricyclic alone.
Trichotillomania
In combination with lithium and tryptophan for severe, particularly treatment-resistant, depression. This combination, in a similar vein, has also been used for clomipramine-resistant obsessive-compulsive disorder. When electroconvulsive therapy is performed alongside this treatment regimen (as may be the case in severe depression, possibly accompanied with thyroxine), however, great care must be taken with lithium. The overall risk of seizures may have to be weighed against the refractory severity of the current illness and the necessity of combining treatments.
Isolated cataplexy in its own right (i.e., in the absence of the characteristically associated narcolepsy), admittedly a rare condition
There is anecdotal and tentative, as-yet-undocumented information suggesting that clomipramine may address certain core features of hyperacusis, and possibly misophonia. In the case of hyperacusis, this appears to be especially the case at doses above 150 mg/day. As with obsessive-compulsive disorder, a full response to any one dose may not be experienced for up to 12–16 weeks.
Although lithium is most associated with the treatment of bipolar disorder (where it is known for its general mood-stabilising properties and for being especially useful in treating and preventing mania), it may have a certain place in the management of treatment-resistant depression (which is often of higher severity than other depressions that have not been addressed with ECT). In these cases it is often prescribed alongside SSRIs (e.g., fluoxetine, paroxetine), venlafaxine and various of the tricyclics (e.g., clomipramine, amitriptyline, nortriptyline, maprotiline), which is why it may sometimes feature in discussions of depression being managed with clomipramine. Lithium also significantly reduces the long-term risk of suicide in general. In any case, it is not necessary to have a diagnosis of bipolar affective disorder (manic-depressive illness), or even to be considered to have subtle elements of it ("soft bipolarity"), to benefit from lithium in the context of treatment with clomipramine.
In a meta-analysis of various trials involving fluoxetine (Prozac), fluvoxamine (Faverin/Luvox), and sertraline (Zoloft) to test their relative efficacies in treating OCD, clomipramine was found to be significantly more effective. Other studies have borne out similar results even when risk of bias is eliminated. Its potentially significantly greater side-effect profile, however, makes it a second-line choice in the treatment of OCD. SSRIs are generally better tolerated but appear to be inferior in terms of actual clinical efficacy.
Contraindications
Contraindications include:
Known hypersensitivity to clomipramine, or any of the excipients or cross-sensitivity to tricyclic antidepressants of the dibenzazepine group
Recent myocardial infarction
Any degree of heart block or other cardiac arrhythmias
Mania
Severe liver disease
Narrow angle glaucoma
Untreated urinary retention
It must not be given in combination with, or within 3 weeks before or after, treatment with a monoamine oxidase inhibitor (moclobemide included; however, clomipramine may be initiated sooner, 48 hours after discontinuation of moclobemide).
Pregnancy and lactation
Clomipramine use during pregnancy is associated with congenital heart defects in the newborn. It is also associated with reversible withdrawal effects in the newborn. Clomipramine is also distributed in breast milk and hence nursing while taking clomipramine is advised against.
Side effects
Clomipramine has been associated with the side effects listed below:
Very common (>10% frequency):
Accommodation defect
Blurred vision
Nausea
Dry mouth (Xerostomia)
Constipation
Fatigue
Dizziness
Tremor
Headache
Myoclonus
Drowsiness
Somnolence
Restlessness
Micturition disorder
Sexual dysfunction (erectile dysfunction and loss of libido)
Hyperhidrosis (profuse sweating)
Common (1–10% frequency):
Weight loss
Dystonia
Cognitive impairment
Orthostatic hypotension
Sinus tachycardia
Clinically irrelevant ECG changes (e.g. T- and ST-wave changes) in patients of normal cardiac status
Palpitations
Tinnitus (hearing ringing in one's ears)
Mydriasis (dilated pupils)
Vomiting
Abdominal disorders
Diarrhoea
Decreased appetite
Increased transaminases
Increased Alkaline phosphatase
Speech disorders
Paraesthesia
Muscle hypertonia
Dysgeusia
Memory impairment
Muscular weakness
Disturbance in attention
Confusional state
Disorientation
Hallucinations (particularly in elderly patients and patients with Parkinson's disease)
Anxiety
Agitation
Sleep disorders
Mania
Hypomania
Aggression
Depersonalization
Insomnia
Nightmares
Aggravation of depression
Delirium
Galactorrhoea (lactation that is not associated with pregnancy or breastfeeding)
Orgasms triggered by yawning
Hot flush
Dermatitis allergic (skin rash, urticaria)
Photosensitivity reaction
Pruritus (itching)
Uncommon (0.1–1% frequency):
Convulsions
Ataxia
Arrhythmias
Elevated blood pressure
Activation of psychotic symptoms
Very rare (<0.01% frequency):
Pancytopaenia — an abnormally low amount of all the different types of blood cells in the blood (including platelets, white blood cells and red blood cells).
Leukopenia — a low white blood cell count.
Agranulocytosis — a more severe form of leukopenia; a dangerously low neutrophil count which leaves one open to life-threatening infections due to the role of the white blood cells in defending the body from invaders.
Thrombocytopenia — an abnormally low amount of platelets in the blood which are essential to clotting and hence this leads to an increased tendency to bruise and bleed, including, potentially, internally.
Eosinophilia — an abnormally high number of eosinophils — the cells that fight off parasitic infections — in the blood.
Syndrome of inappropriate secretion of antidiuretic hormone (SIADH) — a potentially fatal reaction to certain medications that is due to an excessive release of antidiuretic hormone — a hormone that prevents the production of urine by increasing the reabsorption of fluids in the kidney — this results in the development of various electrolyte abnormalities (e.g. hyponatraemia [low blood sodium], hypokalaemia [low blood potassium], hypocalcaemia [low blood calcium]).
Glaucoma
Oedema (local or generalised)
Alopecia (hair loss)
Hyperpyrexia (a high fever that is above 41.5 °C)
Hepatitis (liver swelling) with or without jaundice — the yellowing of the eyes, the skin, and mucous membranes due to impaired liver function.
Abnormal ECG
Anaphylactic and anaphylactoid reactions including hypotension
Neuroleptic malignant syndrome (NMS) — a potentially fatal side effect of antidopaminergic agents such as antipsychotics, tricyclic antidepressants and antiemetics (drugs that relieve nausea and vomiting). NMS develops over a period of days or weeks and is characterised by the following symptoms:
Tremor
Muscle rigidity
Mental status change (such as confusion, delirium, mania, hypomania, agitation, coma, etc.)
Hyperthermia (high body temperature)
Tachycardia (high heart rate)
Blood pressure changes
Diaphoresis (sweating profusely)
Diarrhoea
Alveolitis allergic (pneumonitis) with or without eosinophilia
Purpura
Conduction disorder (e.g. widening of QRS complex, prolonged QT interval, PR/PQ interval changes, bundle-branch block, torsade de pointes, particularly in patients with hypokalaemia)
Individual side-effects may or may not be amenable to treatment.
As noted below, bethanechol may alleviate anti-muscarinic/anti-cholinergic side-effects. It may also treat sexual side-effects common to the likes of clomipramine, SSRIs and phenelzine.
Topiramate has been used to offset the weight gain induced by various antidepressants and antipsychotics, and more broadly for general weight loss (likewise with bupropion). This option may be especially attractive in patients who were either overweight prior to clomipramine treatment or who have gained an undesirable amount of weight on it.
Another potential advantage of topiramate as an adjunct for people taking clomipramine is its status as an anticonvulsant medication, which theoretically raises the seizure threshold in patients (a threshold which clomipramine lowers, to an extent that precludes dosing above 250 mg/day in normal circumstances, likewise with maprotiline and its 225 mg/day ceiling). It may thus be useful, and of increased importance, for patients with a familial or personal history of epilepsy or seizures of some other kind to concurrently take a daily dose of an anticonvulsant drug (topiramate, gabapentin, etc.) should they require or opt for treatment with an antidepressant that significantly reduces the seizure threshold (bupropion, clomipramine, amoxapine, maprotiline, venlafaxine).
In the case of seizures occurring due to overdose of tricyclic antidepressants, intravenous lorazepam may successfully terminate them. Phenytoin may or may not prevent them in the first instance but its status as an appropriate acute treatment for these seizures is somewhat controversial.
Tremor may be relieved with a beta-blocker (e.g., pindolol, propranolol, atenolol). In certain cases of tremor, pindolol may be an especially sensible option, as there is substantial evidence that its use is an effective augmentation strategy for obsessive-compulsive disorder, an important indication for clomipramine.
Withdrawal
Withdrawal symptoms may occur during gradual or particularly abrupt withdrawal of tricyclic antidepressant drugs. Possible symptoms include: nausea, vomiting, abdominal pain, diarrhea, insomnia, headache, nervousness, anxiety, dizziness and worsening of psychiatric status. Differentiating between the return of the original psychiatric disorder and clomipramine withdrawal symptoms is important. Clomipramine withdrawal can be severe. Withdrawal symptoms can also occur in neonates when clomipramine is used during pregnancy. A major mechanism of withdrawal from tricyclic antidepressants is believed to be due to a rebound effect of excessive cholinergic activity due to neuroadaptations as a result of chronic inhibition of cholinergic receptors by tricyclic antidepressants. Restarting the antidepressant and slow tapering is the treatment of choice for tricyclic antidepressant withdrawal. Some withdrawal symptoms may respond to anticholinergics, such as atropine or benztropine mesylate.
Overdose
Clomipramine overdose usually presents with the following symptoms:
Signs of central nervous system depression such as:
stupor
coma
drowsiness
restlessness
ataxia
Mydriasis
Convulsions
Enhanced reflexes
Muscle rigidity
Athetoid and choreoathetoid movements
Serotonin syndrome - a condition with many of the same symptoms as neuroleptic malignant syndrome but has a significantly more rapid onset
Cardiovascular effects including:
arrhythmias (including Torsades de pointes)
tachycardia
QTc interval prolongation
conduction disorders
hypotension
shock
heart failure
cardiac arrest
Apnoea
Cyanosis
Respiratory depression
Vomiting
Fever
Sweating
Oliguria
Anuria
There is no specific antidote for overdose and all treatment is purely supportive and symptomatic. Treatment with activated charcoal may be used to limit absorption in cases of oral overdose. Anyone suspected of overdosing on clomipramine should be hospitalised and kept under close surveillance for at least 72 hours. Clomipramine has been reported as being less toxic in overdose than most other TCAs in one meta-analysis but this may well be due to the circumstances surrounding most overdoses as clomipramine is more frequently used to treat conditions for which the rate of suicide is not particularly high such as OCD. In another meta-analysis, however, clomipramine was associated with a significant degree of toxicity in overdose.
Interactions
Clomipramine may interact with a number of different medications, including:
The monoamine oxidase inhibitors, which include isocarboxazid, moclobemide, phenelzine, selegiline and tranylcypromine
Antiarrhythmic agents, due to the effects of TCAs like clomipramine on cardiac conduction; there is also a potential pharmacokinetic interaction with quinidine, because clomipramine is metabolised by CYP2D6 in vivo
Diuretics, due to the potential for hypokalaemia (low blood potassium) to develop, which increases the risk of QT interval prolongation and torsades de pointes
The selective serotonin reuptake inhibitors (SSRIs), due both to potential additive serotonergic effects leading to serotonin syndrome and to a potential pharmacokinetic interaction with the SSRIs that inhibit CYP2D6 (e.g., fluoxetine, paroxetine)
Serotonergic agents such as triptans, other tricyclic antidepressants and tramadol, due to the potential for serotonin toxicity (serotonin syndrome)
Clomipramine's use is also advised against in those concurrently taking CYP2D6 inhibitors, due to the potential for increased plasma levels of clomipramine and the resulting potential for CNS toxicity and cardiotoxicity.
Fluvoxamine increases the serotoninergic effects of clomipramine and, likewise, clomipramine increases fluvoxamine levels.
Pharmacology
Pharmacodynamics
Clomipramine is a reuptake inhibitor of serotonin and norepinephrine, or a serotonin–norepinephrine reuptake inhibitor (SNRI); that is, it blocks the reuptake of these neurotransmitters back into neurons by preventing them from interacting with their transporters, thereby increasing their extracellular concentrations in the synaptic cleft and resulting in increased serotonergic and noradrenergic neurotransmission. In addition, clomipramine also has antiadrenergic, antihistamine, antiserotonergic, antidopaminergic, and anticholinergic activities. It is specifically an antagonist of the α1-adrenergic receptor, the histamine H1 receptor, the serotonin 5-HT2A, 5-HT2C, 5-HT3, 5-HT6, and 5-HT7 receptors, the dopamine D1, D2, and D3 receptors, and the muscarinic acetylcholine receptors (M1–M5). Like other TCAs, clomipramine weakly blocks voltage-dependent sodium channels as well.
Probably all "anticholinergic" (antimuscarinic) side effects may be successfully reversed in a majority of people with bethanechol chloride, although knowledge of this amenability has decreased in medical circles over the decades. Bethanechol supplementation arguably should be seriously entertained when tricyclics that often carry significant antimuscarinic effects (amitriptyline, protriptyline, imipramine, clomipramine) are prescribed, as it may alleviate potentially treatment-limiting side effects (blurry vision, dry mouth, urinary hesitancy/retention, etc.). This practice can make drugs of otherwise indispensable value more tolerable to certain patients and spare them needless suffering, reducing the overall side-effect burden or concern thereof.
Although clomipramine shows around 100- to 200-fold preference in affinity for the serotonin transporter (SERT) over the norepinephrine transporter (NET), its major active metabolite, desmethylclomipramine (norclomipramine), binds to the NET with very high affinity (Ki = 0.32 nM) and with dramatically reduced affinity for the SERT (Ki = 31.6 nM). Moreover, desmethylclomipramine circulates at concentrations that are approximately twice those of clomipramine. In accordance, occupancy of both the SERT and the NET has been shown with clomipramine administration in positron emission tomography studies with humans and non-human primates. As such, clomipramine is in fact a fairly balanced SNRI rather than only a serotonin reuptake inhibitor (SRI).
The antidepressant effects of clomipramine are thought to be due to reuptake inhibition of serotonin and norepinephrine, while serotonin reuptake inhibition only is thought to be responsible for the effectiveness of clomipramine in the treatment of OCD. Conversely, antagonism of the H1, α1-adrenergic, and muscarinic acetylcholine receptors is thought to contribute to its side effects. Blockade of the H1 receptor is specifically responsible for the antihistamine effects of clomipramine and side effects like sedation and somnolence (sleepiness). Antagonism of the α1-adrenergic receptor is thought to cause orthostatic hypotension and dizziness. Inhibition of muscarinic acetylcholine receptors is responsible for the anticholinergic side effects of clomipramine like dry mouth, constipation, urinary retention, blurred vision, and cognitive/memory impairment. In overdose, sodium channel blockade in the brain is believed to cause the coma and seizures associated with TCAs while blockade of sodium channels in the heart is considered to cause cardiac arrhythmias, cardiac arrest, and death. On the other hand, sodium channel blockade is also thought to contribute to the analgesic effects of TCAs, for instance in the treatment of neuropathic pain.
The exceptionally strong serotonin reuptake inhibition of clomipramine likely precludes the possibility of its antagonism of serotonin receptors (which it binds to with more than 100-fold lower affinity than the SERT) resulting in a net decrease in signaling by these receptors. In accordance, while serotonin receptor antagonists like cyproheptadine and chlorpromazine are effective as antidotes against serotonin syndrome, clomipramine is nonetheless capable of inducing this syndrome. In fact, while all TCAs are SRIs and serotonin receptor antagonists to varying extents, the only TCAs that are associated with serotonin syndrome are clomipramine and to a lesser extent its dechlorinated analogue imipramine, which are the two most potent SRIs of the TCAs (and in relation to this have the highest ratios of serotonin reuptake inhibition to serotonin receptor antagonism). As such, whereas other TCAs can be combined with monoamine oxidase inhibitors (with caution due to the risk of hypertensive crisis from NET inhibition; sometimes done in treatment-resistant depressives), clomipramine cannot be due to the risk of serotonin syndrome and death. Unlike the case of its serotonin receptor antagonism, orthostatic hypotension is a common side effect of clomipramine, suggesting that its blockade of the α1-adrenergic receptor is strong enough to overcome the stimulatory effects on the α1-adrenergic receptor of its NET inhibition.
Serotonergic activity
Clomipramine is an extremely strong SRI by all accounts. Its affinity for the SERT was reported in one study using human tissues to be 0.14 nM, which is considerably higher than that of other TCAs. For example, the TCAs with the next highest affinities for the SERT in the study were imipramine, amitriptyline, and dosulepin (dothiepin), with Ki values of 1.4 nM, 4.3 nM, and 8.3 nM, respectively. In addition, clomipramine has a terminal half-life that is around twice as long as that of amitriptyline and imipramine. In spite of these differences, however, clomipramine is used clinically at the same usual dosages as other serotonergic TCAs (100–200 mg/day). Some health authorities recommend a daily dosage in the range of 30 to 75 mg in single or divided doses. Initial dosage should be 10 mg/day with gradual increments to 30–150 mg/day in divided doses or as a single dose at bedtime. Health Canada recommends a maximum dose of 200 mg/day for outpatients. A sustained-release 75 mg formulation may be preferable at doses above 150 mg/day (i.e., 200 to 250 mg/day). Clomipramine achieves typical circulating concentrations that are similar in range to those of other TCAs but with an upper limit that is around twice that of amitriptyline and imipramine. For these reasons, clomipramine is the most potent SRI among the TCAs and is far stronger as an SRI than other TCAs at typical clinical dosages. In addition, clomipramine is more potent as an SRI than any of the selective serotonin reuptake inhibitors (SSRIs); it is more potent than paroxetine, the strongest SSRI.
A positron emission tomography study found that a single low dose of 10 mg clomipramine to healthy volunteers resulted in 81.1% occupancy of the SERT, which was comparable to the 84.9% SERT occupancy by 50 mg fluvoxamine. In the study, single doses of 5 to 50 mg clomipramine resulted in 67.2 to 94.0% SERT occupancy while single doses of 12.5 to 50 mg fluvoxamine resulted in 28.4 to 84.9% SERT occupancy. Chronic treatment with higher doses was able to achieve up to 100.0% SERT occupancy with clomipramine and up to 93.6% SERT occupancy with fluvoxamine. Other studies have found 83% SERT occupancy with 20 mg/day paroxetine and 77% SERT occupancy with 20 mg/day citalopram. These results indicate that very low doses of clomipramine are able to substantially occupy the SERT and that clomipramine achieves higher occupancy of the SERT than SSRIs at comparable doses. Moreover, clomipramine may be able to achieve more complete occupancy of the SERT at high doses, at least relative to fluvoxamine.
If the ratios of the 80% SERT occupancy dosage to the approved clinical dosage range are calculated and compared for SSRIs, SNRIs, and clomipramine, it can be deduced that clomipramine is by far the strongest SRI used medically. The lowest approved dosage of clomipramine can be estimated to be roughly comparable in SERT occupancy to the maximum approved dosages of the strongest SSRIs and SNRIs. Because their mechanism of action was originally not known and dose-ranging studies were never conducted, first-generation antipsychotics were dramatically overdosed in patients. It has been suggested that the same may have been true for clomipramine and other TCAs. Nonetheless, there is little doubt that many patients may indeed benefit from much higher doses: 250 mg/day, as mentioned elsewhere, is the typical maximum recommended dose, but some people may need as much as 300 mg/day or more to benefit from all clomipramine has to offer beyond its potent SNRI capacity alone.
Obsessive–compulsive disorder
Clomipramine was the first drug that was investigated for and found to be effective in the treatment of OCD. In addition, it was the first drug to be approved by the FDA in the United States for the treatment of OCD. The effectiveness of clomipramine in the treatment of OCD is far greater than that of other TCAs, which are comparatively weak SRIs; a meta-analysis found pre- versus post-treatment effect sizes of 1.55 for clomipramine, compared with 0.67 for imipramine and 0.11 for desipramine. In contrast to other TCAs, studies have found that clomipramine and SSRIs, which are more selective SRIs, have similar effectiveness in the treatment of OCD. However, multiple meta-analyses have found that clomipramine nonetheless retains a significant effectiveness advantage relative to SSRIs; in the same meta-analysis mentioned previously, the effect sizes of SSRIs in the treatment of OCD ranged from 0.81 for fluoxetine to 1.36 for sertraline (relative to 1.55 for clomipramine). However, the effectiveness advantage for clomipramine has not been apparent in head-to-head comparisons of clomipramine versus SSRIs for OCD. The differences in effectiveness findings could be due to differences in methodologies across non-head-to-head studies.
Relatively high doses of SSRIs are needed for effectiveness in the treatment of OCD. Studies have found that high dosages of SSRIs above the normally recommended maximums are significantly more effective in OCD treatment than lower dosages (e.g., 250 to 400 mg/day sertraline versus 200 mg/day sertraline). In addition, the combination of clomipramine and SSRIs has also been found to be significantly more effective in alleviating OCD symptoms, and clomipramine is commonly used to augment SSRIs for this reason. Studies have found that intravenous clomipramine, which is associated with very high circulating concentrations of the drug and a much higher ratio of clomipramine to its metabolite desmethylclomipramine, is more effective than oral clomipramine in the treatment of OCD. There is a case report of complete remission from OCD for approximately one month following a massive overdose of fluoxetine, an SSRI with a uniquely long duration of action. Taken together, stronger serotonin reuptake inhibition has consistently been associated with greater alleviation of OCD symptoms, and since clomipramine, at the clinical dosages in which it is employed, is effectively the strongest SRI used medically, this may underlie its unique effectiveness in the treatment of OCD.
In addition to serotonin reuptake inhibition, clomipramine is also a mild but clinically significant antagonist of the dopamine D1, D2, and D3 receptors at high concentrations. Addition of antipsychotics, which are potent dopamine receptor antagonists, to SSRIs has been found to significantly augment their effectiveness in the treatment of OCD. As such, besides strong serotonin reuptake inhibition, clomipramine at high doses might also block dopamine receptors to treat OCD symptoms, and this could additionally or alternatively be involved in its possible effectiveness advantage over SSRIs. For this reason, it may also be that augmentation with neuroleptics (a common procedure in cases of inadequate response to monotherapy with an SRI) is needed less frequently with clomipramine relative to SSRIs, the latter of which apparently lack significant activity as dopamine-receptor antagonists.
Although clomipramine is probably more effective in the treatment of OCD compared to SSRIs, it is greatly inferior to them in terms of tolerability and safety due to its lack of selectivity for the SERT and promiscuous pharmacological activity. In addition, clomipramine has high toxicity in overdose and can potentially result in death, whereas death rarely, if ever, occurs with overdose of SSRIs. It is for these reasons that clomipramine, in spite of potentially superior effectiveness to SSRIs, is now rarely used as a first-line agent in the treatment of OCD, with SSRIs being used as first-line therapies instead and clomipramine generally being reserved for more severe cases and as a second-line agent.
Pharmacokinetics
The oral bioavailability of clomipramine is approximately 50%. Peak plasma concentrations occur around 2–6 hours (with an average of 4.7 hours) after taking clomipramine orally and are in the range of 56–154 ng/mL (178–489 nmol/L). Steady-state concentrations of clomipramine are around 134–532 ng/mL (426–1,690 nmol/L), with an average of 218 ng/mL (692 nmol/L), and are reached after 7 to 14 days of repeated dosing. Steady-state concentrations of the active metabolite, desmethylclomipramine, are around 230–550 ng/mL (730–1,750 nmol/L). The volume of distribution (Vd) of clomipramine is approximately 17 L/kg. It binds approximately 97–98% to plasma proteins, primarily to albumin. Clomipramine is metabolized in the liver mainly by CYP2D6. It has a terminal half-life of 32 hours, and its N-desmethyl metabolite, desmethylclomipramine, has a terminal half-life of approximately 69 hours. Clomipramine is mostly excreted in urine (60%) and feces (32%).
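As a rough consistency check (using the standard five-half-lives rule of thumb, not a figure from this article): steady state is approached after about $5 \times t_{1/2}$, i.e. $5 \times 32\ \text{h} = 160\ \text{h} \approx 7\ \text{days}$ for clomipramine and $5 \times 69\ \text{h} = 345\ \text{h} \approx 14\ \text{days}$ for desmethylclomipramine, which brackets the 7-to-14-day window quoted above.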
Although the normal maximum recommended total daily dosage of clomipramine is 250 milligrams, treatment-resistant cases of depression and obsessive-compulsive disorder may require corresponding doses within the range of 255 to 300 milligrams. Indeed, doses of 375 milligrams per day, sometimes in combination with venlafaxine or aripiprazole, have not only been necessary but, remarkably, relatively well tolerated. Caution, however, is generally prudent when doing this, as seizures, which are more likely to occur with clomipramine than with any other tricyclic antidepressant besides maprotiline, become an increasing risk beyond the normally recommended ceiling. At daily doses of 250 mg or less, the incidence of seizures is estimated to be around 0.48%. (All tricyclic antidepressants technically lower the seizure threshold, but this is only significant with amoxapine, maprotiline and, indeed, clomipramine.)
Dose increases between 25 mg and 150 mg, barring significant drug-drug interactions that may elevate clomipramine blood levels, should be titrated in increments of 50 mg (25 mg in the case of panic disorder, and 10 to 25 mg in the cases of premature ejaculation and narcoleptic cataplexy), and above 150 mg in increments of 25 mg. Average optimal total daily doses for depression (whether mild or severe), premature ejaculation, cataplexy-narcolepsy, obsessive-compulsive disorder, panic disorder and trichotillomania, respectively, are (in milligrams) 150, 50, 25–75, 150–250, 50–150 and 150–200. Some consider the minimum optimally therapeutic dose of clomipramine in obsessive-compulsive disorder, which often requires much greater serotonergic activity than other indications for these drugs, to be 200, rather than 150, milligrams per day. For premature ejaculation, clomipramine can be taken as needed (prn) 3 to 5 hours before attempted sexual intercourse.
Chemistry
Clomipramine is a tricyclic compound, specifically a dibenzazepine, and possesses three rings fused together with a side chain attached in its chemical structure. Other dibenzazepine TCAs include imipramine, desipramine, and trimipramine. Clomipramine is a derivative of imipramine with a chlorine atom added to one of its rings and is also known as 3-chloroimipramine. It is a tertiary amine TCA, with its side chain-demethylated metabolite desmethylclomipramine being a secondary amine. Other tertiary amine TCAs include amitriptyline, imipramine, dosulepin (dothiepin), doxepin, and trimipramine. The chemical name of clomipramine is 3-(3-chloro-10,11-dihydro-5H-dibenzo[b,f]azepin-5-yl)-N,N-dimethylpropan-1-amine and its free base form has a chemical formula of C19H23ClN2 with a molecular weight of 314.857 g/mol. The drug is used commercially almost exclusively as the hydrochloride salt; the free base has been used rarely. The CAS Registry Number of the free base is 303-49-1 and of the hydrochloride is 17321-77-6.
History
Clomipramine was developed by Geigy as a chlorinated derivative of imipramine. It was first referenced in the literature in 1961 and was patented in 1963. The drug was first approved for medical use in Europe in the treatment of depression in 1970, and was the last of the major TCAs to be marketed. In fact, clomipramine was initially considered to be a "me-too drug" by the FDA, and in relation to this, was declined licensing for depression in the United States. As such, to this day, clomipramine remains the only TCA that is available in the United States that is not approved for the treatment of depression, in spite of the fact that it is a highly effective antidepressant.
Clomipramine was eventually approved in the United States for the treatment of OCD in 1989 and became available in 1990. It was the first drug to be investigated and found effective in the treatment of OCD. The benefits in OCD were first reported by Juan José López-Ibor in 1967, and the first double-blind, placebo-controlled clinical trial of clomipramine for OCD was conducted in 1976, with more rigorous clinical studies that solidified its effectiveness conducted in the 1980s. It remained the "gold standard" for the treatment of OCD for many years until the introduction of the SSRIs, which have since largely superseded it due to greatly improved tolerability and safety (although notably not effectiveness). Clomipramine is the only TCA that has been shown to be effective in the treatment of OCD and that is approved by the FDA for the treatment of OCD; the other TCAs failed clinical trials for this indication, likely due to insufficient serotonergic activity.
Society and culture
Generic names
Clomipramine is the English and French generic name of the drug, and clomipramine hydrochloride is the name of its hydrochloride salt. Clomipramina is its generic name in Spanish, Portuguese and Italian, clomipramin is its generic name in German, and clomipraminum is its generic name in Latin.
Brand names
Clomipramine is marketed throughout the world mainly under the brand names Anafranil and Clomicalm for use in humans and animals, respectively.
Veterinary uses
In the US, clomipramine is only licensed to treat separation anxiety in dogs, for which it is sold under the brand name Clomicalm. It has proven effective in the treatment of obsessive–compulsive disorders in cats and dogs. In dogs, it has also demonstrated efficacy similar to that of fluoxetine in treating tail chasing, and some evidence suggests it is effective in treating noise phobia.
Clomipramine has also demonstrated efficacy in treating urine spraying in cats. Various studies have been done on the effects of clomipramine on cats to reduce urine spraying/marking behavior. It has been shown to be able to reduce this behavior by up to 75% in a trial period of four weeks.
| Biology and health sciences | Psychiatric drugs | Health |
1605322 | https://en.wikipedia.org/wiki/Reverse%20proxy | Reverse proxy | In computer networks, a reverse proxy or surrogate server is a proxy server that appears to any client to be an ordinary web server, but in reality merely acts as an intermediary that forwards the client's requests to one or more ordinary web servers. Reverse proxies help increase scalability, performance, resilience, and security, but they also carry a number of risks.
Companies that run web servers often set up reverse proxies to facilitate the communication between an Internet user's browser and the web servers. An important advantage of doing so is that the web servers can be hidden behind a firewall on a company-internal network, and only the reverse proxy needs to be directly exposed to the Internet. Reverse proxy servers are implemented in popular open-source web servers. Dedicated reverse proxy servers are used by some of the biggest websites on the Internet.
A reverse proxy is capable of tracking the IP addresses of all requests relayed through it, as well as reading and/or modifying any non-encrypted traffic. However, this implicitly means that a threat actor who compromises the server could do the same.
Reverse proxies differ from forward proxies, which are used when the client is restricted to a private, internal network and asks a forward proxy to retrieve resources from the public Internet.
Uses
Large websites and content delivery networks use reverse proxies, together with other techniques, to balance the load between internal servers. Reverse proxies can keep a cache of static content, which further reduces the load on these internal servers and the internal network. It is also common for reverse proxies to add features such as compression or TLS encryption to the communication channel between the client and the reverse proxy.
Reverse proxies can inspect HTTP headers, which, for example, allows them to present a single IP address to the Internet while relaying requests to different internal servers based on the URL of the HTTP request.
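To make this routing idea concrete, here is a minimal sketch of a path-based reverse proxy in Python. The ports, path prefixes, and backend addresses are invented for illustration, and a production proxy would also handle other HTTP methods, streaming bodies, X-Forwarded-For headers, timeouts, and connection reuse.

# Minimal sketch of URL-based request routing in a reverse proxy.
# All addresses, ports, and path prefixes are illustrative assumptions.
import http.server
import urllib.error
import urllib.request

# Hypothetical routing table: URL path prefix -> internal origin server.
ROUTES = {
    "/api/": "http://127.0.0.1:8081",
    "/": "http://127.0.0.1:8082",  # default backend
}

class ReverseProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Choose the backend by the longest matching path prefix.
        prefix = max((p for p in ROUTES if self.path.startswith(p)), key=len)
        upstream_url = ROUTES[prefix] + self.path
        try:
            with urllib.request.urlopen(upstream_url) as resp:
                body = resp.read()
                status = resp.status
                headers = resp.getheaders()
        except urllib.error.URLError:
            self.send_error(502)  # backend unreachable: Bad Gateway
            return
        self.send_response(status)
        for name, value in headers:
            # Hop-by-hop headers must not be forwarded verbatim.
            if name.lower() not in ("connection", "transfer-encoding"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)  # clients only ever see this single endpoint

if __name__ == "__main__":
    # Only this process needs to be exposed to the Internet.
    http.server.HTTPServer(("", 8080), ReverseProxyHandler).serve_forever()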
Reverse proxies can hide the existence and characteristics of origin servers. This can make it more difficult to determine the actual location of the origin server / website and, for instance, more challenging to initiate legal action such as takedowns or block access to the website, as the IP address of the website may not be immediately apparent. Additionally, the reverse proxy may be located in a different jurisdiction with different legal requirements, further complicating the takedown process.
Application firewall features can protect against common web-based attacks, like a denial-of-service attack (DoS) or distributed denial-of-service attacks (DDoS). Without a reverse proxy, removing malware or initiating takedowns (while simultaneously dealing with the attack) on one's own site, for example, can be difficult.
In the case of secure websites, a web server may not perform TLS encryption itself, but instead offload the task to a reverse proxy that may be equipped with TLS acceleration hardware. (See TLS termination proxy.)
A reverse proxy can distribute the load from incoming requests to several servers, with each server supporting its own application area. In the case of reverse proxying web servers, the reverse proxy may have to rewrite the URL in each incoming request in order to match the relevant internal location of the requested resource.
A reverse proxy can reduce load on its origin servers by caching static content and dynamic content, known as web acceleration. Proxy caches of this sort can often satisfy a considerable number of website requests, greatly reducing the load on the origin server(s).
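The caching idea can be shown as a toy sketch (the names and the fixed 60-second lifetime are assumptions for illustration; real proxy caches honor Cache-Control headers, validators, and eviction policies):

# Toy illustration of reverse-proxy caching: repeat requests for the same
# path are answered from memory instead of reaching the origin server.
import time

CACHE = {}        # path -> (stored_at, body)
TTL_SECONDS = 60  # assumed fixed freshness lifetime, for illustration

def fetch_via_cache(path, fetch_from_origin):
    """Return a cached body if still fresh; otherwise fetch and store it."""
    entry = CACHE.get(path)
    if entry is not None and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                 # cache hit: origin is not contacted
    body = fetch_from_origin(path)      # cache miss: forward to the origin
    CACHE[path] = (time.time(), body)
    return body

# Demo with a stand-in origin that counts how often it is called.
origin_calls = []
def fake_origin(path):
    origin_calls.append(path)
    return f"<html>page {path}</html>"

fetch_via_cache("/index.html", fake_origin)
fetch_via_cache("/index.html", fake_origin)
print(len(origin_calls))  # 1 -> two client requests, one origin fetch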
A reverse proxy can optimize content by compressing it in order to speed up loading times.
In a technique named "spoon-feeding", a dynamically generated page can be produced all at once and served to the reverse proxy, which can then return it to the client a little bit at a time. The program that generates the page need not remain open, thus releasing server resources during the possibly extended time the client requires to complete the transfer.
Reverse proxies can operate wherever multiple web-servers must be accessible via a single public IP address. The web servers listen on different ports in the same machine, with the same local IP address or, possibly, on different machines with different local IP addresses. The reverse proxy analyzes each incoming request and delivers it to the right server within the local area network.
Reverse proxies can perform A/B testing and multivariate testing without requiring application code to handle the logic of which version is served to a client.
A reverse proxy can add access authentication to a web server that does not have any authentication.
Risks
When the transit traffic is encrypted and the reverse proxy needs to filter/cache/compress or otherwise modify or improve the traffic, the proxy first must decrypt and re-encrypt communications. This requires the proxy to possess the TLS certificate and its corresponding private key, extending the number of systems that can have access to non-encrypted data and making it a more valuable target for attackers.
The vast majority of external data breaches happen either when hackers succeed in abusing an existing reverse proxy that was intentionally deployed by an organisation, or when hackers succeed in converting an existing Internet-facing server into a reverse proxy server. Compromised or converted systems allow external attackers to specify where they want their attacks proxied to, enabling their access to internal networks and systems.
Applications that were developed for the internal use of a company are not typically hardened to public standards and are not necessarily designed to withstand all hacking attempts. When an organisation allows external access to such internal applications via a reverse proxy, they might unintentionally increase their own attack surface and invite hackers.
If a reverse proxy is not configured to filter attacks or it does not receive daily updates to keep its attack signature database up to date, a zero-day vulnerability can pass through unfiltered, enabling attackers to gain control of the system(s) that are behind the reverse proxy server.
Using the reverse proxy of a third party (e.g., Cloudflare, Imperva) places the entire triad of confidentiality, integrity and availability in the hands of the third party who operates the proxy.
If a reverse proxy is fronting many different domains, its outage (e.g., by a misconfiguration or DDoS attack) could bring down all fronted domains.
Reverse proxies can also become a single point of failure if there is no other way to access the back end server.
| Technology | Networks | null |
1605389 | https://en.wikipedia.org/wiki/Infinity%20symbol | Infinity symbol | The infinity symbol (∞) is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding.
This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag.
The infinity symbol and several variations of the symbol are available in various character encodings.
History
The lemniscate has been a common decorative motif since ancient times; for instance, it is commonly seen on Viking Age combs.
The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant of the lower-case form of omega, the last letter in the Greek alphabet.
Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. One paper by Leonhard Euler was typeset with an open letterform more closely resembling a reflected and sideways S than a lemniscate, and such stand-ins have even been used in place of the infinity symbol itself.
Usage
Mathematics
In mathematics, the infinity symbol is typically used to represent a potential infinity. For instance, in mathematical expressions with summations and limits such as
\[ \sum_{i=0}^{\infty} \frac{1}{2^{i}} = 2, \]
the infinity sign is conventionally interpreted as meaning that the variable grows arbitrarily large towards infinity, rather than actually taking an infinite value, although other interpretations are possible.
When quantifying actual infinity, infinite entities taken as objects per se, other notations are typically used. For example, ℵ₀ (aleph-nought) denotes the smallest infinite cardinal number (representing the size of the set of natural numbers), and ω (omega) denotes the smallest infinite ordinal number.
The infinity symbol may also be used to represent a point at infinity, especially when there is only one such point under consideration. This usage includes, in particular, the infinite point of a projective line, and the point added to a topological space to form its one-point compactification.
Other technical uses
In areas other than mathematics, the infinity symbol may take on other related meanings. For instance, it has been used in bookbinding to indicate that a book is printed on acid-free paper and will therefore be long-lasting. On cameras and their lenses, the infinity symbol indicates that the lens's focal length is set to an infinite distance, and is "probably one of the oldest symbols to be used on cameras".
Symbolism and literary uses
In modern mysticism, the infinity symbol has become identified with a variation of the ouroboros, an ancient image of a snake eating its own tail that has also come to symbolize the infinite, and the ouroboros is sometimes drawn in figure-eight form to reflect this identification—rather than in its more traditional circular form.
In the works of Vladimir Nabokov, including The Gift and Pale Fire, the figure-eight shape is used symbolically to refer to the Möbius strip and the infinite, as is the case in these books' descriptions of the shapes of bicycle tire tracks and of the outlines of half-remembered people. Nabokov's poem after which he entitled Pale Fire explicitly refers to "the miracle of the lemniscate". Other authors whose works use this shape with its symbolic meaning of the infinite include James Joyce, in Ulysses, and David Foster Wallace, in Infinite Jest.
Graphic design
The well-known shape and meaning of the infinity symbol have made it a common typographic element of graphic design. For instance, the Métis flag, used by the Canadian Métis people since the early 19th century, is based around this symbol. Different theories have been put forward for the meaning of the symbol on this flag, including the hope for an infinite future for Métis culture and its mix of European and First Nations traditions, but also evoking the geometric shapes of Métis dances, Celtic knots, or Plains First Nations Sign Language.
A rainbow-coloured infinity symbol is also used by the autism rights movement, as a way to symbolize the infinite variation of the people in the movement and of human cognition. The Bakelite company took up this symbol in its corporate logo to refer to the wide range of varied applications of the synthetic material they produced. Versions of this symbol have been used in other trademarks, corporate logos, and emblems including those of Fujitsu, Cell Press, and the 2022 FIFA World Cup.
Encoding
The symbol is encoded in Unicode at U+221E ∞ INFINITY and in LaTeX as \infty: ∞. An encircled version, U+267E ♾ PERMANENT PAPER SIGN, is encoded for use as a symbol for acid-free paper.
The Unicode set of symbols also includes several variant forms of the infinity symbol, which are less frequently available in fonts, in the block Miscellaneous Mathematical Symbols-B.
| Mathematics | Basics | null |
5640856 | https://en.wikipedia.org/wiki/Longfin%20mako%20shark | Longfin mako shark | The longfin mako shark (Isurus paucus) is a species of mackerel shark in the family Lamnidae, with a probable worldwide distribution in temperate and tropical waters. An uncommon species, it is typically lumped together under the name "mako" with its better-known relative, the shortfin mako shark (I. oxyrinchus). The longfin mako is a pelagic species found in moderately deep water, having been reported to a depth of . Growing to a maximum length of , the slimmer build and long, broad pectoral fins of this shark suggest that it is a slower and less active swimmer than the shortfin mako.
Longfin mako sharks are predators that feed on small schooling bony fishes and cephalopods. Whether this shark is capable of elevating its body temperature above that of the surrounding water like the other members of its family is uncertain, though it possesses the requisite physiological adaptations. Reproduction in this species is aplacental viviparous, meaning the embryos hatch from eggs inside the uterus. In the later stages of development, the unborn young are fed nonviable eggs by the mother (oophagy). The litter size is typically two, but may be as many as eight. The longfin mako is of limited commercial value, as its meat and fins are of lower quality than those of other pelagic sharks; however, it is caught unintentionally in low numbers across its range. The International Union for Conservation of Nature has assessed this species as endangered due to its rarity, low reproductive rate, and continuing bycatch mortality. In 2019, alongside the shortfin mako, the IUCN listed the longfin mako as "Endangered".
Taxonomy and phylogeny
The original description of the longfin mako was published in 1966 by Cuban marine scientist Darío Guitart-Manday, in the scientific journal Poeyana, based on three adult specimens from the Caribbean Sea. An earlier synonym of this species may be Lamiostoma belyaevi, described by Glückman in 1964. However, the type specimen designated by Glückman consists of a set of fossil teeth that could not be confirmed as belonging to the longfin mako, thus the name paucus took precedence over belyaevi, despite being published later. The specific epithet paucus is Latin for "few", referring to the rarity of this species relative to the shortfin mako.
The sister species relationship between the longfin and shortfin mako has been confirmed by several phylogenetic studies based on mitochondrial DNA. In turn, the closest relative of the two mako sharks is the great white shark (Carcharodon carcharias). Fossil teeth belonging to the longfin mako have been recovered from the Muddy Creek marl of the Grange Burn formation, south of Hamilton, Australia, and from the Mizunami Group in Gifu Prefecture, Japan. Both deposits date to the Middle Miocene Epoch (15–11 million years ago (Mya)). The Oligo-Miocene fossil shark tooth taxon Isurus retroflexus may be ancestral to, or even conspecific with, the longfin mako.
Distribution and habitat
Widely scattered records suggest that the longfin mako shark has a worldwide distribution in tropical and warm-temperate oceans; the extent of its range is difficult to determine due to confusion with the shortfin mako. In the Atlantic Ocean, it is known from the Gulf Stream off the East Coast of the United States, the Caribbean, and southern Brazil in the west, and from the Iberian Peninsula to Ghana in the east, possibly including the Mediterranean Sea and Cape Verde. In the Indian Ocean, it has been reported from the Mozambique Channel. In the Pacific Ocean, it occurs off Japan and Taiwan, northeastern Australia, a number of islands in the Central Pacific northeast of Micronesia, and southern California.
An inhabitant of the open ocean, the longfin mako generally remains in the upper mesopelagic zone during the day and ascends into the epipelagic zone at night. Off Cuba, it is most frequently caught at a depth of and is rare at depths above . Off New South Wales, most catches occur at a depth of , in areas with a surface temperature around .
Description
The longfin mako is the larger of the two mako species and the second-largest species in its family (after the great white), reaching upwards of in length and weighing over ; females grow larger than males. The largest reported longfin mako was a female caught off Pompano Beach, Florida, in February 1984. Large specimens can scale over . This species has a slim, fusiform shape with a long, pointed snout and large eyes that lack nictitating membranes (protective third eyelids). Twelve to 13 tooth rows occur on either side of the upper jaw and 11–13 tooth rows are on either side of the lower jaw. The teeth are large and knife-shaped, without serrations or secondary cusps; the outermost teeth in the lower jaw protrude prominently from the mouth. The gill slits are long and extend onto the top of the head.
The pectoral fins are as long or longer than the head, with a nearly straight front margin and broad tips. The first dorsal fin is large with a rounded apex, and is placed behind the pectoral fins. The second dorsal and anal fins are tiny. The caudal peduncle is expanded laterally into strong keels. The caudal fin is crescent-shaped, with a small notch near the tip of the upper lobe. The dermal denticles are elliptical, longer than wide, with three to seven horizontal ridges leading to a toothed posterior margin. The coloration is dark blue to grayish black above and white below. The unpaired fins are dark except for a white rear margin on the anal fin; the pectoral and pelvic fins are dark above and white below with sharp gray posterior margins. In adults and large juveniles, the area beneath the snout, around the jaw, and the origin of the pectoral fins have dusky mottling.
Biology and ecology
The biology of the longfin mako is little-known; it is somewhat common in the western Atlantic and possibly the central Pacific, while in the eastern Atlantic, it is rare and outnumbered over 1,000-fold by the shortfin mako in fishery landings. The longfin mako's slender body and long, broad pectoral fins evoke the oceanic whitetip shark (Carcharhinus longimanus) and the blue shark (Prionace glauca), both slow-cruising sharks of upper oceanic waters. This morphological similarity suggests that the longfin mako is less active than the shortfin mako, one of the fastest and most energetic sharks. Like the other members of its family, this species possesses blood vessel countercurrent exchange systems called retia mirabilia (Latin for "wonderful nets"; singular rete mirabile) in its trunk musculature and around its eyes and brain. This system enables other mackerel sharks to conserve metabolic heat and maintain a higher body temperature than their environments, though whether the longfin mako is capable of the same is uncertain.
The longfin mako has large eyes and is attracted to cyalume sticks (chemical lights), implying that it is a visual hunter. Its diet consists mainly of small, schooling bony fishes and squids. In October 1972, a female with the broken bill of a swordfish (Xiphias gladius) lodged in her abdomen was caught in the northeastern Indian Ocean; whether the shark was preying on swordfish as the shortfin mako does, or encountered the swordfish in some other aggressive context, is not known. Adult longfin makos have no natural predators except for killer whales, while young individuals may fall prey to larger sharks.
As in other mackerel sharks, the longfin mako is aplacental viviparous and typically gives birth to two pups at a time (one inside each uterus), though a female pregnant with eight well-developed embryos was caught in the Mona Passage near Puerto Rico in January 1983. The developing embryos are oophagous; once they deplete their supply of yolk, they sustain themselves by consuming large quantities of nonviable eggs ovulated by their mother. No evidence of sibling cannibalism is seen, as in the sand tiger shark (Carcharias taurus). The pups measure long at birth, relatively larger than the young of the shortfin mako, and have proportionally longer heads and pectoral fins than the adults. Capture records off Florida suggest that during the winter, females swim into shallow coastal waters to give birth. Male and female sharks reach sexual maturity at lengths around and , respectively.
Human interactions
No attacks on humans have been attributed to the longfin mako shark. Nevertheless, its large size and teeth make it potentially dangerous. This shark is caught, generally in low numbers, as bycatch on longlines intended for tuna, swordfish, and other pelagic sharks, as well as in anchored gillnets and on hook-and-line. The meat is marketed fresh, frozen, or dried and salted, though it is considered to be of poor quality due to its mushy texture. The fins are also considered to be of lower quality for use in shark fin soup, though are valuable enough that captured sharks are often finned at sea. The carcasses may be processed into animal feed and fish meal, while the skin, cartilage, and jaws are also of value.
The most significant longfin mako catches are by Japanese tropical longline fisheries, and those sharks occasionally enter Tokyo fish markets. From 1987 to 1994, United States fisheries reported catches (discarded, as this species is worthless on the North American market) of 2–12 tons per year. Since 1999, retention of this species has been prohibited by the U.S. National Marine Fisheries Service Fishery Management Plan for Atlantic sharks. Longfin mako were once significant in the Cuban longline fishery, comprising one-sixth of the shark landings from 1971 to 1972; more recent data from this fishery are not available. The IUCN has assessed this species as "Vulnerable" due to its uncommonness, low reproductive rate, and susceptibility to shark fishing gear. It has also been listed under Annex I of the Convention on Migratory Species Migratory Shark Memorandum of Understanding. In the North Atlantic, stocks of the shortfin mako have declined 40% or more since the late 1980s, and concerns exist that populations of the longfin mako are following the same trend. In 2019, along with its relative the shortfin mako, the IUCN listed the longfin mako as "Endangered" due to continuing declines alongside 58 elasmobranch species.
| Biology and health sciences | Sharks | Animals |
5643937 | https://en.wikipedia.org/wiki/Music%20and%20mathematics | Music and mathematics | Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory.
While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties".
History
Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers".
From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection.
Time, rhythm, and meter
Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics.
The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3).
Musical form
Musical form is the plan by which a short piece of music is extended. The term "plan" is also used in architecture, to which musical form is often compared. Like the architect, the composer must take into account the function for which the work is intended and the means available, practicing economy and making use of repetition and order. The common types of form known as binary and ternary ("twofold" and "threefold") once again demonstrate the importance of small integral values to the intelligibility and appeal of music.
Frequency and harmony
A musical scale is a discrete set of pitches used in making or describing music. The most important scale in the Western tradition is the diatonic scale but many others have been used and proposed in various historical eras and parts of the world. Each pitch corresponds to a particular frequency, expressed in hertz (Hz), sometimes referred to as cycles per second (c.p.s.). A scale has an interval of repetition, normally the octave. The octave of any pitch refers to a frequency exactly twice that of the given pitch.
Succeeding superoctaves are pitches found at frequencies four, eight, sixteen times, and so on, of the fundamental frequency. Pitches at frequencies of half, a quarter, an eighth and so on of the fundamental are called suboctaves. In musical harmony there is no case in which, if a given pitch is considered consonant, its octaves are considered otherwise. Therefore, any note and its octaves will generally be found similarly named in musical systems (e.g. all will be called doh or A or Sa, as the case may be).
When expressed as a frequency bandwidth an octave A2–A3 spans from 110 Hz to 220 Hz (span=110 Hz). The next octave will span from 220 Hz to 440 Hz (span=220 Hz). The third octave spans from 440 Hz to 880 Hz (span=440 Hz) and so on. Each successive octave spans twice the frequency range of the previous octave.
Because we are often interested in the relations or ratios between the pitches (known as intervals) rather than the precise pitches themselves in describing a scale, it is usual to refer to all the scale pitches in terms of their ratio from a particular pitch, which is given the value of one (often written 1/1), generally a note which functions as the tonic of the scale. For interval size comparison, cents are often used.
{|class="wikitable"
!Commonterm
!Examplename
!Hz
!Multiple offundamental
!Ratio ofwithin octave
!Centswithin octave
|-
|
|style="text-align:center;"|A2
|110
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A3
|rowspan=2 |220
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|E4
|330
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A4
|rowspan=2 |440
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|C5
|550
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|E5
|660
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|G5
|770
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A5
|rowspan=2 |880
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|}
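The cents column in the table above follows from a simple logarithm: an interval with frequency ratio r spans 1200 × log₂(r) cents, so an octave is exactly 1200 cents. A short Python check (the helper name is ours):

# Interval size in cents from a frequency ratio: 1200 cents per octave,
# measured on a base-2 logarithmic scale.
import math

def ratio_to_cents(ratio: float) -> float:
    return 1200 * math.log2(ratio)

print(round(ratio_to_cents(2 / 1)))  # 1200 -> octave
print(round(ratio_to_cents(3 / 2)))  # 702  -> perfect fifth
print(round(ratio_to_cents(5 / 4)))  # 386  -> just major third
print(round(ratio_to_cents(7 / 4)))  # 969  -> harmonic seventh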
Tuning systems
There are two main families of tuning systems: equal temperament and just tuning. Equal temperament scales are built by dividing an octave into intervals which are equal on a logarithmic scale, which results in perfectly evenly divided scales, but with ratios of frequencies which are irrational numbers. Just scales are built by multiplying frequencies by rational numbers, which results in simple ratios between frequencies, but with scale divisions that are uneven.
One major difference between equal temperament tunings and just tunings is differences in acoustical beat when two notes are sounded together, which affects the subjective experience of consonance and dissonance. Both of these systems, and the vast majority of music in general, have scales that repeat on the interval of every octave, which is defined as frequency ratio of 2:1. In other words, every time the frequency is doubled, the given scale repeats.
Below are Ogg Vorbis files demonstrating the difference between just intonation and equal temperament. You might need to play the samples several times before you can detect the difference.
– this sample has a half-step at 550 Hz (C♯ in the just intonation scale), followed by a half-step at 554.37 Hz (C♯ in the equal temperament scale).
– this sample consists of a "dyad". The lower note is a constant A (440 Hz in either scale), the upper note is a C in the equal-tempered scale for the first 1", and a C in the just intonation scale for the last 1". Phase differences make it easier to detect the transition than in the previous sample.
Just tunings
5-limit tuning, the most common form of just intonation, is a system of tuning using tones that are regular number harmonics of a single fundamental frequency. This was one of the scales Johannes Kepler presented in his Harmonices Mundi (1619) in connection with planetary motion. The same scale was given in transposed form by Scottish mathematician and musical theorist, Alexander Malcolm, in 1721 in his 'Treatise of Musick: Speculative, Practical and Historical', and by theorist Jose Wuerschmidt in the 20th century. A form of it is used in the music of northern India.
American composer Terry Riley also made use of the inverted form of it in his "Harp of New Albion". Just intonation gives superior results when there is little or no chord progression: voices and other instruments gravitate to just intonation whenever possible. However, it gives two different whole-tone intervals (9:8 and 10:9), which makes it unsuitable for a fixed-tuned instrument, such as a piano, that must play in more than one key. To calculate the frequency of a note in a scale given in terms of ratios, the frequency ratio is multiplied by the tonic frequency. For instance, with a tonic of A4 (A natural above middle C), the frequency is 440 Hz, and a justly tuned fifth above it (E5) is simply 440×(3:2) = 660 Hz.
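The arithmetic of this paragraph is easy to reproduce. The sketch below is a minimal illustration assuming one common choice of 5-limit just major scale ratios (other variants exist); it simply multiplies each ratio by the tonic frequency:

# Just intonation: each scale degree is a rational ratio applied to the
# tonic frequency. One common 5-limit just major scale is assumed here.
from fractions import Fraction

TONIC_HZ = 440  # A4, as in the example in the text

RATIOS = [Fraction(1, 1), Fraction(9, 8), Fraction(5, 4), Fraction(4, 3),
          Fraction(3, 2), Fraction(5, 3), Fraction(15, 8), Fraction(2, 1)]

for r in RATIOS:
    print(f"{str(r):>4}: {float(TONIC_HZ * r):7.2f} Hz")

# The fifth degree, 3/2, gives 440 * 3/2 = 660.00 Hz, matching the text.
# Note the two different whole tones: 9/8 from 1/1 to 9/8, but 10/9 from
# 9/8 to 5/4 -- the fixed-tuning problem mentioned above.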
Pythagorean tuning is tuning based only on the perfect consonances, the (perfect) octave, perfect fifth, and perfect fourth. Thus the major third is considered not a third but a ditone, literally "two tones", and is (9:8)² = 81:64, rather than the independent and harmonic just 5:4 = 80:64 directly below. A whole tone is a secondary interval, being derived from two perfect fifths minus an octave, (3:2)²/2 = 9:8.
The just major third, 5:4, and minor third, 6:5, are a syntonic comma, 81:80, apart from their Pythagorean equivalents 81:64 and 32:27 respectively. According to Carl Dahlhaus, "the dependent third conforms to the Pythagorean, the independent third to the harmonic tuning of intervals."
Western common practice music usually cannot be played in just intonation but requires a systematically tempered scale. The tempering can involve either the irregularities of well temperament or be constructed as a regular temperament, either some form of equal temperament or some other regular meantone, but in all cases will involve the fundamental features of meantone temperament. For example, the root of chord ii, if tuned to a fifth above the dominant, would be a major whole tone (9:8) above the tonic. If tuned a just minor third (6:5) below a just subdominant degree of 4:3, however, the interval from the tonic would equal a minor whole tone (10:9). Meantone temperament reduces the difference between 9:8 and 10:9. Their ratio, (9:8)/(10:9) = 81:80, is treated as a unison. The interval 81:80, called the syntonic comma or comma of Didymus, is the key comma of meantone temperament.
Equal temperament tunings
In equal temperament, the octave is divided into equal parts on the logarithmic scale. While it is possible to construct an equal temperament scale with any number of notes (for example, the 24-tone Arab tone system), the most common number is 12, which makes up the equal-temperament chromatic scale. In Western music, a division into twelve intervals is commonly assumed unless it is specified otherwise.
For the chromatic scale, the octave is divided into twelve equal parts, each semitone (half-step) is an interval of the twelfth root of two so that twelve of these equal half steps add up to exactly an octave. With fretted instruments it is very useful to use equal temperament so that the frets align evenly across the strings. In the European music tradition, equal temperament was used for lute and guitar music far earlier than for other instruments, such as musical keyboards. Because of this historical force, twelve-tone equal temperament is now the dominant intonation system in the Western, and much of the non-Western, world.
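As a sketch of the twelfth-root-of-two rule just described (the helper name is ours): a pitch n semitones above a reference has its frequency multiplied by 2^(n/12).

# Twelve-tone equal temperament: every semitone scales frequency by the
# twelfth root of two, so twelve semitones give back an exact 2:1 octave.
A4_HZ = 440.0

def equal_tempered(semitones_above_ref: int, ref_hz: float = A4_HZ) -> float:
    return ref_hz * 2 ** (semitones_above_ref / 12)

print(f"{equal_tempered(12):.2f}")  # 880.00 -> octave, exact
print(f"{equal_tempered(7):.2f}")   # 659.26 -> tempered fifth (just: 660)
print(f"{equal_tempered(4):.2f}")   # 554.37 -> tempered major third (C#5)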
Equally tempered scales have been used and instruments built using various other numbers of equal intervals. The 19 equal temperament, first proposed and used by Guillaume Costeley in the 16th century, uses 19 equally spaced tones, offering better major thirds and far better minor thirds than normal 12-semitone equal temperament at the cost of a flatter fifth. The overall effect is one of greater consonance. Twenty-four equal temperament, with twenty-four equally spaced tones, is widespread in the pedagogy and notation of Arabic music. However, in theory and practice, the intonation of Arabic music conforms to rational ratios, as opposed to the irrational ratios of equally tempered systems.
While any analog to the equally tempered quarter tone is entirely absent from Arabic intonation systems, analogs to a three-quarter tone, or neutral second, frequently occur. These neutral seconds, however, vary slightly in their ratios dependent on maqam, as well as geography. Indeed, Arabic music historian Habib Hassan Touma has written that "the breadth of deviation of this musical step is a crucial ingredient in the peculiar flavor of Arabian music. To temper the scale by dividing the octave into twenty-four quarter-tones of equal size would be to surrender one of the most characteristic elements of this musical culture."
53 equal temperament arises from the near equality of 53 perfect fifths with 31 octaves, and was noted by Jing Fang and Nicholas Mercator.
Connections to mathematics
Set theory
Musical set theory uses the language of mathematical set theory in an elementary way to organize musical objects and describe their relationships. To analyze the structure of a piece of (typically atonal) music using musical set theory, one usually starts with a set of tones, which could form motives or chords. By applying simple operations such as transposition and inversion, one can discover deep structures in the music. Operations such as transposition and inversion are called isometries because they preserve the intervals between tones in a set.
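A minimal sketch of these two operations, treating pitch classes as integers modulo 12 (the function names are ours, not a standard library):

# Pitch-class arithmetic: transposition (T_n) and inversion (T_n I) act on
# pitch classes modulo 12 and preserve the interval content of a set.
def transpose(pcs, n):
    """Transpose a pitch-class set up by n semitones (T_n)."""
    return sorted((p + n) % 12 for p in pcs)

def invert(pcs, n=0):
    """Invert a pitch-class set about 0, then transpose by n (T_n I)."""
    return sorted((n - p) % 12 for p in pcs)

c_major = [0, 4, 7]            # pitch classes C, E, G
print(transpose(c_major, 2))   # [2, 6, 9] -> D major (T_2)
print(invert(c_major))         # [0, 5, 8] -> F minor, same interval content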
Abstract algebra
Expanding on the methods of musical set theory, some theorists have used abstract algebra to analyze music. For example, the pitch classes in an equally tempered octave form an abelian group with 12 elements. It is possible to describe just intonation in terms of a free abelian group.
Transformational theory is a branch of music theory developed by David Lewin. The theory allows for great generality because it emphasizes transformations between musical objects, rather than the musical objects themselves.
Theorists have also proposed musical applications of more sophisticated algebraic concepts. The theory of regular temperaments has been extensively developed with a wide range of sophisticated mathematics, for example by associating each regular temperament with a rational point on a Grassmannian.
The chromatic scale has a free and transitive action of the cyclic group ℤ/12ℤ, with the action being defined via transposition of notes. So the chromatic scale can be thought of as a torsor for the group ℤ/12ℤ.
Numbers and series
Some composers have incorporated the golden ratio and Fibonacci numbers into their work.
Category theory
The mathematician and musicologist Guerino Mazzola has used category theory (topos theory) for a basis of music theory, which includes using topology as a basis for a theory of rhythm and motives, and differential geometry as a basis for a theory of musical phrasing, tempo, and intonation.
Musicians who were or are also notable mathematicians
Albert Einstein – Accomplished pianist and violinist.
Art Garfunkel (Simon & Garfunkel) – Masters in Mathematics Education, Columbia University
Brian Cox – Professor of particle physics in the School of Physics and Astronomy at the University of Manchester.
Brian May (Queen) – BSc (Hons) in Mathematics and Physics, PhD in Astrophysics, both from Imperial College London.
Brian Wecht (Ninja Sex Party) – PhD in particle physics, University of California, San Diego
Dan Snaith – PhD Mathematics, Imperial College London
Delia Derbyshire – BA in mathematics and music from Cambridge.
Donald Knuth – Knuth is an organist and a composer. In 2016 he completed a musical piece for organ titled "Fantasia Apocalyptica". It was premièred in Sweden on January 10, 2018
Ethan Port (Savage Republic) – PhD Mathematics, University of Southern California
Gregg Turner (Angry Samoans) – PhD Mathematics, Claremont Graduate University
Jerome Hines – Five articles published in Mathematics Magazine 1951–1956.
Jonny Buckland (Coldplay) – Studied astronomy and mathematics at University College London.
Kit Armstrong – Degree in music and MSc in mathematics.
Manjul Bhargava – Plays the tabla, won the Fields Medal in 2014.
Phil Alvin (The Blasters) – Mathematics, University of California, Los Angeles
Philip Glass – Studied mathematics and philosophy at the University of Chicago.
Robert Schneider (The Apples in Stereo) – PhD Mathematics, Emory University
Tom Lehrer – BA mathematics from Harvard University.
William Herschel – Astronomer and played the oboe, violin, harpsichord and organ. He composed 24 symphonies and many concertos, as well as some church music.
| Mathematics | Applied mathematics | null |
5644788 | https://en.wikipedia.org/wiki/Merycoidodontoidea | Merycoidodontoidea | Merycoidodontoidea, previously known as "oreodonts" or "ruminating hogs," are an extinct superfamily of prehistoric cud-chewing artiodactyls with short faces and fang-like canine teeth. As their name implies, some of the better known forms were generally hog-like, and the group has traditionally been placed within the Suina (pigs, peccaries and their ancestors), though some recent work suggests they may have been more closely related to camels. "Oreodont" means "mountain teeth," referring to the appearance of the molars. Most oreodonts were sheep-sized, though some genera grew to the size of cattle. They were heavy-bodied, with short four-toed hooves and comparatively long tails.
The animals would have looked rather pig- or sheep-like, but features of their teeth indicate they were more closely related to camelids. They were most likely woodland and grassland browsers, and were widespread in North America during the Oligocene and Miocene. Later forms diversified to suit a range of different habitats. For example, Promerycochoerus had adaptations suggesting a semiamphibious lifestyle, similar to that of modern hippos.
Taxonomy
The two families of oreodonts are the Merycoidodontidae (originally known as Oreodontidae), which contains all of the advanced species, and the Agriochoeridae, the smaller, more primitive oreodonts. Together they form the now-extinct suborder Oreodonta. Oreodonts may have been distantly related to pigs, hippopotamuses, and the pig-like peccaries. Indeed, some scholars place Merycoidodontidae within the pig-related suborder Suina (Suiformes). Other scholars place oreodonts closer to camels in the suborder Tylopoda. Still other experts group the oreodonts together with the short-lived cainotheres in the suborder Ancodonta. All scholars agree, however, that the oreodont was an early form of even-toed ungulate, belonging to the order Artiodactyla. Today, most evidence points towards the oreodonts being tylopods, along with camels, xiphodonts, and protoceratids.
Over 50 genera of Oreodonta have been described in the paleozoological literature. However, oreodonts are widely considered to be taxonomically oversplit, and many of these genera may prove to be synonymous. The last researchers to fully review oreodont taxonomy, C. Bertrand Schultz and Charles H. Falkenbach, have been criticized for erecting excessive numbers of genera, based in part on apparent anatomical differences between specimens that were actually taphonomic deformations caused by postburial forces. Undeformed skulls would be placed in one genus, while skulls crushed from side to side would be placed in a second genus and skulls crushed from front to back would be placed in a third. Researchers are beginning to restudy oreodonts and synonymize many genera, but so far only a few groups have been reviewed.
Natural history
This diverse group of stocky prehistoric mammals grazed amid the grasslands, prairies, or savannas of North and Central America throughout much of the Cenozoic era. First appearing 48 million years ago (Mya) during the warm Eocene epoch of the Paleogene period, the oreodonts dominated the American landscape 34 to 23 Mya during the dry Oligocene epoch, but they mysteriously disappeared 4 Mya during the colder Pliocene epoch of the late Neogene period.
Today, fossil jaws and teeth of the Oreodonta are commonly found amid the 'Oreodon beds' (White River Fauna) of the White River badlands in South Dakota, Nebraska, Colorado and Wyoming. Many oreodont bones have also been reported at the John Day Fossil Beds National Monument in Oregon. Some oreodonts have been found at Agate Fossil Beds National Monument. In Oligocene/Miocene Florida, oreodonts are surprisingly rare. Instead of the swarms found elsewhere, only six genera of oreodonts are known to have ranged there, and only one, Mesoreodon, is known from a single, good skeleton.
Lifestyle
The majority of oreodonts are presumed to have lived in herds, as suggested by the thousands of individuals in the various mass mortalities seen in the White River Badlands, Nebraska Oreodont beds, or Chula Vista, California.
Diversity
Oreodonts underwent a huge diversification during the Oligocene and Miocene, adapting to a number of ecological niches, including:
Semiaquatic – hippo-like Promerycochoerus
Trunked browser – tapir-like Brachycrus
Large grazer – cow-sized Eporeodon
Medium grazer – goat-like Merycoidodon
Small desert herbivore – goat- to cat-sized Sespia
Medium desert herbivore – Mesoreodon and the ubiquitous Leptauchenia
Classification
The family Merycoidodontidae is divided into eleven subfamilies, with four genera not included in any subfamily (incertae sedis) because they are either regarded as basal oreodonts, or their status within the family remains uncertain.
Family †Merycoidodontidae
subfamily incertae sedis
†Aclistomycter
†Merychyus
†Pseudogenetochoerus
†Pseudoleptauchenia
Subfamily †Oreonetinae
†Bathygenys
†Megabathygenys
†Oreonetes
Subfamily †Leptaucheniinae
Tribe †Leptaucheniini
†Limnenetes
†Leptauchenia
Tribe †Sespiini
†Sespia
Subfamily †Merycoidodontinae (syn. Oreodontinae)
†Merycoidodon (syn. Blickohyus, Genetochoerus, Oreodon, Otionohyus, Paramerycoidodon, Prodesmatochoerus, Promesoreodon, Subdesmatochoerus)
†Mesoreodon
Subfamily †Miniochoerinae
†Miniochoerus (syn. Paraminiochoerus, Parastenopsochoerus, Platyochoerus, Pseudostenopsochoerus, Stenopsochoerus)
Subfamily †Desmatochoerinae
†Desmatochoerus
†Eporeodon
†Megoreodon
Subfamily †Promerycochoerinae
†Promerycochoerus
†Merycoides
Subfamily †Merychyinae
†Oreodontoides
†Paroreodon
†Merycoides
†Merychyus
Subfamily †Eporeodontinae
†Dayohyus (syn. Eucrotaphus deemed nomen dubium)
†Eporeodon
Subfamily †Phenacocoelinae
†Phenacocoelus
†Hypsiops
Subfamily †Ticholeptinae
†Mediochoerus
†Ticholeptus
†Ustatochoerus
Subfamily †Merycochoerinae
†Merycochoerus
†Brachycrus
In Lander (1998) the classification of Oreodontoidea was as follows:
Family Agriochoeridae Leidy, 1869 (syn. Artionychidae, Eomerycidae, Protoreodontidae)
Subfamily Agriochoerinae Gill, 1872 (syn. Diplobunopsinae)
Agriochoerus Leidy, 1850b (syn. Agriomeryx, Artionyx, Coloreodon, Diplobunops, Eomeryx, Merycopater)
Subfamily Protoreodontinae Scott, 1890
Protoreodon Scott and Osborn, 1887 (syn. Agriotherium, Chorotherium, Hyomeryx, Mesagriochoerus, Protagriochoerus)
Family Merycoidodontidae
Subfamily Bathygeniinae Lander, 1998
Bathygenys Douglass, 1901 (syn. Megabathygenys, Parabathygenys)
Subfamily Aclistomycterinae Lander, 1998
Aclistomycter Wilson, 1971
Subfamily Leptaucheniinae Schultz and Falkenbach, 1940
Leptauchenia Leidy, 1856 (syn. Brachymeryx, Cyclopidius, Hadroleptauchenia, Limnenetes, Pithecistes, Pseudocyclopidius, Pseudoleptauchenia)
Sespia Stock, 1930 (syn. Megasespia)
Subfamily Miniochoerinae Schultz and Falkenbach, 1956 (syn. Oreonetinae, ?Cotylopinae, ?Merycoidodontinae, ?Oreodontinae)
Subfamily Eucrotaphinae Lander, 1998
Subfamily Merycochoerinae Schultz and Falkenbach, 1940 (syn. Desmatochoerinae, Eporeodontinae, Promerycochoerinae)
Subfamily Phenacocoelinae Schultz and Falkenbach, 1950
Subfamily Ticholeptinae Schultz and Falkenbach, 1941 (syn. Merychyinae)
| Biology and health sciences | Other artiodactyla | Animals |
9556852 | https://en.wikipedia.org/wiki/Quark%20epoch | Quark epoch | In physical cosmology, the quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. The quark epoch began approximately 10−12 seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch, the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10−6 seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch.
| Physical sciences | Physical cosmology | Astronomy |
12024353 | https://en.wikipedia.org/wiki/Blastocladiomycota | Blastocladiomycota | Blastocladiomycota is one of the currently recognized phyla within the kingdom Fungi. Blastocladiomycota was originally the order Blastocladiales within the phylum Chytridiomycota until molecular and zoospore ultrastructural characters were used to demonstrate it was not monophyletic with Chytridiomycota. The order was first erected by Petersen for a single genus, Blastocladia, which was originally considered a member of the oomycetes. Accordingly, members of Blastocladiomycota are often referred to colloquially as "chytrids." However, some feel "chytrid" should refer only to members of Chytridiomycota. Thus, members of Blastocladiomycota are commonly called "blastoclads" by mycologists. Alternatively, members of Blastocladiomycota, Chytridiomycota, and Neocallimastigomycota are lumped together as the zoosporic true fungi. Blastocladiomycota contains 5 families and approximately 12 genera. This early diverging branch of kingdom Fungi is the first to exhibit alternation of generations. As well, two once-popular model organisms (Allomyces macrogynus and Blastocladiella emersonii) belong to this phylum.
Morphology
Morphology in Blastocladiomycota varies greatly. For example, members of Coelomycetaceae are simple, unwalled, and plasmodial in nature. Some species in Blastocladia are monocentric, like the chytrids, while others are polycentric. The most remarkable are those members, such as Allomyces that demonstrate determinant, differentiated growth.
Reproduction/life cycle
Sexual reproduction
As stated above, some members of Blastocladiomycota exhibit alternation of generations. Members of this phylum also exhibit a form of sexual reproduction known as anisogamy. Anisogamy is the fusion of two sexual gametes that differ in morphology, usually size. In Allomyces, the thallus (body) is attached by rhizoids, and has an erect trunk on which reproductive organs are formed at the end of branches. During the haploid phase, the thallus forms male and female gametangia that release flagellated gametes. Gametes attract one another using pheromones and eventually fuse to form a zygote. The germinated zygote produces a diploid thallus with two types of sporangia: thin-walled zoosporangia and thick-walled resting spores (or sporangia). The thin-walled sporangia release diploid zoospores. The resting spore serves as a means of enduring unfavorable conditions. When conditions are favorable again, meiosis occurs and haploid zoospores are released. These germinate and grow into haploid thalli that will produce “male” and “female” gametangia and gametes.
Asexual reproduction
Similar to Chytridiomycota, members of Blastocladiomycota produce asexual zoospores to colonize new substrates. In some species, a curious phenomenon has been observed in the asexual zoospores. From time to time, asexual zoospores will pair up and exchange cytoplasm but not nuclei.
Ecological roles
Similar to Chytridiomycota, members of Blastocladiomycota are capable of growing on refractory materials, such as pollen, keratin, cellulose, and chitin. The best known species, however, are the parasites. Members of Catenaria are parasites of nematodes, midges, crustaceans, and even another blastoclad, Coelomyces. Members of the genus Physoderma and Urophlyctis are obligate plant parasites. Of economic importance is Physoderma maydis, a parasite of maize and the causal agent of brown spot disease. Also of importance are the species of Urophlyctis that parasitize alfalfa. However, ecologically, Physoderma are important parasites of many aquatic and marsh angiosperms. Also of human interest, for health reasons, are members of Coelomomyces, an unusual parasite of mosquitoes that requires an alternate crustacean host (the same one parasitized by members of Catenaria) to complete its life cycle. Others that are ecologically interesting include a parasite of water bears and the zooplankter Daphnia.
Taxonomy
Based on the work of Philippe Silar and "The Mycota: A Comprehensive Treatise on Fungi as Experimental Systems for Basic and Applied Research" and synonyms from "Part 1- Virae, Prokarya, Protists, Fungi".
Phylum Blastocladiomycota Tehler, 1988 ex James 2006 [Allomycota Cavalier-Smith 1981; Allomycotina Cavalier-Smith 1998]
Class Physodermatomycetes Tedersoo et al. 2018
Order Physodermatales Cavalier-Smith 2012 [Physodermatineae]
Family Physodermataceae [Urophlyctidaceae Schroeter 1886]
Genus Paraphysoderma Boussiba, Zarka & James 2011
Genus Physoderma Wallroth 1833 [Oedomyces Saccardo ex Trabut 1894]
Genus Urophlyctis Schröter 1886
Class Blastocladiomycetes James 2006 [Allomycetes Cavalier-Smith 1998]
Order Blastocladiales Petersen 1909 sensu Cavalier-Smith 2012 [Allomycales; Blastocladiineae]
Genus Endoblastidium Codreanu 1931
Genus Polycaryum Stempell 1901
Genus Nematoceromyces Doweld 2013
Genus Blastocladiella Matthews 1937 [Clavochytridium Couch & Cox 1939; Sphaerocladia Stüben 1939]
Family Coelomomycetaceae Couch 1962
Genus Callimastix Weissenberg 1912
Genus Coelomycidium Debaisieux 1919
Genus Coelomomyces Keilin 1921 [Zografia Bogayavlensky 1922]
Family Sorochytriaceae Dewel 1985
Genus Sorochytrium Dewel 1985
Family Catenariaceae Couch 1945
Genus Catenaria Sorokin 1889 non Roussel 1806 [Perirhiza Karling 1946]
Genus Catenophlyctis Karling 1965
Family Blastocladiaceae Petersen 1909
Genus Blastocladiopsis Sparrow 1950
Genus Microallomyces Emerson & Robertson 1974
Genus Blastocladia Reinsch 1877
Genus Allomyces Butler 1911 [Septocladia Coker & Grant 1922]
| Biology and health sciences | Basics | Plants |
12029940 | https://en.wikipedia.org/wiki/Port%20of%20Yokohama | Port of Yokohama | The Port of Yokohama is operated by the Port and Harbor Bureau of the City of Yokohama in Japan. It opens onto Tokyo Bay. The port is located at latitude 35°27′00″N and longitude 139°38′46″E. To the south lies the Port of Yokosuka; to the north, the ports of Kawasaki and Tokyo.
History
The Treaty of Amity and Commerce of 1858 specified Kanagawa as an open port. The Port of Yokohama formally opened to foreign trade on the 2nd of June 1859. The port grew rapidly through the Meiji and Taisho periods as a center for raw silk export and technology import.
Current facilities
Yokohama Port has ten major piers. Honmoku Pier is the port's core facility with 24 berths including 14 container berths. Osanbashi Pier handles passenger traffic including cruises, and has customs, immigration and quarantine facilities for international travel.
Detamachi, the "banana pier," is outfitted for receiving fresh fruits and vegetables. Daikoku Pier, on an artificial island measuring 321 hectares, is equipped with container logistics facilities including seven container berths and houses a million square meters of warehouse space at the Yokohama Port Cargo Center.
At Minami Honmoku, the newest facility to be developed, there are two operational 350-meter berths with a depth of 16 meters, capable of handling larger post-Panamax container ships and equipped with six mega container cranes that span 22 rows of containers. Additional berths are under construction for ships of dimensions equal to or exceeding those of a Mærsk E-class container ship.
Seven berths of Mizuho Pier are used by the United States Forces Japan. Additional piers handle timber and serve other functions.
Statistics
In 2013, the Port of Yokohama served 37,706 ships. It handled 271,276,977 tons of cargo and 2,888,220 TEU containers. The total value of the cargo was 10,921,656 million yen.
APM Terminals Yokohama facility at Minami Honmoku was recognised in 2013 as the most productive container terminal in the world averaging 163 crane moves per hour, per ship between the vessel's arrival and departure at the berth.
| Technology | Specific piers and ports | null |
1039124 | https://en.wikipedia.org/wiki/Stellar%20structure | Stellar structure | Stellar structure models describe the internal structure of a star in detail and make predictions about the luminosity, the color and the future evolution of the star. Different classes and ages of stars have different internal structures, reflecting their elemental makeup and energy transport mechanisms.
Heat transport
Different layers of the stars transport heat up and outwards in different ways, primarily convection and radiative transfer, but thermal conduction is important in white dwarfs.
Convection is the dominant mode of energy transport when the temperature gradient is steep enough so that a given parcel of gas within the star will continue to rise if it rises slightly via an adiabatic process. In this case, the rising parcel is buoyant and continues to rise if it is warmer than the surrounding gas; if the rising parcel is cooler than the surrounding gas, it will fall back to its original height. In regions with a low temperature gradient and a low enough opacity to allow energy transport via radiation, radiation is the dominant mode of energy transport.
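This condition can be stated compactly in the standard notation of the Schwarzschild criterion (a form the article itself does not write out, so the symbols here are supplied): a layer becomes convectively unstable when the temperature gradient needed to carry the flux by radiation exceeds the adiabatic gradient,

$$\nabla_{\mathrm{rad}} > \nabla_{\mathrm{ad}}, \qquad \nabla \equiv \frac{d\ln T}{d\ln P}.$$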
The internal structure of a main sequence star depends upon the mass of the star.
In stars with masses of 0.3–1.5 solar masses (), including the Sun, hydrogen-to-helium fusion occurs primarily via proton–proton chains, which do not establish a steep temperature gradient. Thus, radiation dominates in the inner portion of solar mass stars. The outer portion of solar mass stars is cool enough that hydrogen is neutral and thus opaque to ultraviolet photons, so convection dominates. Therefore, solar mass stars have radiative cores with convective envelopes in the outer portion of the star.
In massive stars (greater than about 1.5 ), the core temperature is above about 1.8×107 K, so hydrogen-to-helium fusion occurs primarily via the CNO cycle. In the CNO cycle, the energy generation rate scales as the temperature to the 15th power, whereas the rate scales as the temperature to the 4th power in the proton-proton chains. Due to the strong temperature sensitivity of the CNO cycle, the temperature gradient in the inner portion of the star is steep enough to make the core convective. In the outer portion of the star, the temperature gradient is shallower but the temperature is high enough that the hydrogen is nearly fully ionized, so the star remains transparent to ultraviolet radiation. Thus, massive stars have a radiative envelope.
The lowest mass main sequence stars have no radiation zone; the dominant energy transport mechanism throughout the star is convection.
Equations of stellar structure
The simplest commonly used model of stellar structure is the spherically symmetric quasi-static model, which assumes that a star is in a steady state and that it is spherically symmetric. It contains four basic first-order differential equations: two represent how matter and pressure vary with radius; two represent how temperature and luminosity vary with radius.
In forming the stellar structure equations (exploiting the assumed spherical symmetry), one considers the matter density ρ, temperature T, total pressure (matter plus radiation) P, luminosity ℓ, and energy generation rate per unit mass ε in a spherical shell of thickness dr at a distance r from the center of the star. The star is assumed to be in local thermodynamic equilibrium (LTE) so the temperature is identical for matter and photons. Although LTE does not strictly hold because the temperature a given shell "sees" below itself is always hotter than the temperature above, this approximation is normally excellent because the photon mean free path, λ, is much smaller than the length over which the temperature varies considerably, i.e. λ ≪ T/|∇T|.
First is a statement of hydrostatic equilibrium: the outward force due to the pressure gradient within the star is exactly balanced by the inward force due to gravity. This is sometimes referred to as stellar equilibrium.
dP/dr = −Gmρ/r²,
where m is the cumulative mass inside the shell at radius r and G is the gravitational constant. The cumulative mass increases with radius according to the mass continuity equation:
dm/dr = 4πr²ρ
Integrating the mass continuity equation from the star center (r = 0) to the radius of the star (r = R) yields the total mass M of the star.
Considering the energy leaving the spherical shell yields the energy equation:
dℓ/dr = 4πr²ρ(ε − εν),
where εν is the luminosity produced in the form of neutrinos (which usually escape the star without interacting with ordinary matter) per unit mass. Outside the core of the star, where no nuclear reactions occur, no energy is generated, so the luminosity is constant.
The energy transport equation takes differing forms depending upon the mode of energy transport. For conductive energy transport (appropriate for a white dwarf), the energy equation is
ℓ/(4πr²) = −k dT/dr,
where k is the thermal conductivity.
In the case of radiative energy transport, appropriate for the inner portion of a solar mass main sequence star and the outer envelope of a massive main sequence star,
dT/dr = −3κρℓ/(64πσr²T³),
where κ is the opacity of the matter, σ is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one.
The case of convective energy transport does not have a known rigorous mathematical formulation, and involves turbulence in the gas. Convective energy transport is usually modeled using mixing length theory. This treats the gas in the star as containing discrete elements which roughly retain the temperature, density, and pressure of their surroundings but move through the star as far as a characteristic length, called the mixing length. For a monatomic ideal gas, when the convection is adiabatic, meaning that the convective gas bubbles don't exchange heat with their surroundings, mixing length theory yields
dT/dr = (1 − 1/γ)(T/P)(dP/dr),
where γ = cp/cv is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, γ = 5/3.) When the convection is not adiabatic, the true temperature gradient is not given by this equation. For example, in the Sun the convection at the base of the convection zone, near the core, is adiabatic but that near the surface is not. The mixing length theory contains two free parameters which must be set to make the model fit observations, so it is a phenomenological theory rather than a rigorous mathematical formulation.
Also required are the equations of state, relating the pressure, opacity and energy generation rate to other local variables appropriate for the material, such as temperature, density, chemical composition, etc. Relevant equations of state for pressure may have to include the perfect gas law, radiation pressure, pressure due to degenerate electrons, etc. Opacity cannot be expressed exactly by a single formula. It is calculated for various compositions at specific densities and temperatures and presented in tabular form. Stellar structure codes (meaning computer programs calculating the model's variables) either interpolate in a density-temperature grid to obtain the opacity needed, or use a fitting function based on the tabulated values. A similar situation occurs for accurate calculations of the pressure equation of state. Finally, the nuclear energy generation rate is computed from nuclear physics experiments, using reaction networks to compute reaction rates for each individual reaction step and equilibrium abundances for each isotope in the gas.
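As an illustration of the table-lookup approach described above, here is a minimal sketch of bilinear interpolation in a density–temperature grid; the grid points and table values are made up for the example, not taken from any real opacity table:

```python
import numpy as np

# toy log10(opacity) table on a small density-temperature grid (made-up numbers)
log_rho = np.array([-8.0, -6.0, -4.0])          # log10 density, g/cm^3
log_T = np.array([4.0, 5.0, 6.0, 7.0])          # log10 temperature, K
log_kappa = np.array([[0.2, 0.9, 0.5, -0.1],
                      [0.4, 1.3, 0.8, 0.1],
                      [0.6, 1.6, 1.1, 0.3]])    # log10 opacity, cm^2/g

def opacity(rho, T):
    """Bilinear interpolation in the log10(rho)-log10(T) grid, the way a
    stellar structure code might look up tabulated opacities."""
    x, y = np.log10(rho), np.log10(T)
    i = np.clip(np.searchsorted(log_rho, x) - 1, 0, len(log_rho) - 2)
    j = np.clip(np.searchsorted(log_T, y) - 1, 0, len(log_T) - 2)
    tx = (x - log_rho[i]) / (log_rho[i + 1] - log_rho[i])
    ty = (y - log_T[j]) / (log_T[j + 1] - log_T[j])
    k = ((1 - tx) * (1 - ty) * log_kappa[i, j]
         + tx * (1 - ty) * log_kappa[i + 1, j]
         + (1 - tx) * ty * log_kappa[i, j + 1]
         + tx * ty * log_kappa[i + 1, j + 1])
    return 10.0 ** k

print(opacity(1e-5, 3e5))   # cm^2/g, interpolated from the toy table
```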
Combined with a set of boundary conditions, a solution of these equations completely describes the behavior of the star. Typical boundary conditions set the values of the observable parameters appropriately at the surface (r = R) and center (r = 0) of the star: P(R) = 0, meaning the pressure at the surface of the star is zero; m(0) = 0, there is no mass inside the center of the star, as required if the mass density remains finite; m(R) = M, the total mass of the star is the star's mass; and T(R) = Teff, the temperature at the surface is the effective temperature of the star.
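To make the role of these equations and boundary conditions concrete, the sketch below integrates just the hydrostatic equilibrium and mass continuity equations outward from the center, substituting an assumed polytropic equation of state P = Kρ^γ for the full energy and transport equations; the constants are illustrative rather than fitted to any real star:

```python
import numpy as np

G = 6.674e-8        # gravitational constant in cgs units
K = 4.9e14          # polytropic constant (illustrative value)
gamma = 5.0 / 3.0   # adiabatic index of a fully ionized ideal gas

def rho_of_P(P):
    """Invert the assumed polytropic equation of state P = K * rho**gamma."""
    return (P / K) ** (1.0 / gamma)

def integrate_star(rho_c, dr=1e6):
    """Euler-integrate dm/dr = 4*pi*r^2*rho and dP/dr = -G*m*rho/r^2
    outward from the center until the pressure drops to zero (the
    surface boundary condition P(R) = 0)."""
    r = dr
    P = K * rho_c**gamma                      # central pressure
    m = 4.0 / 3.0 * np.pi * r**3 * rho_c      # mass of the innermost sphere
    while P > 0.0:
        rho = rho_of_P(P)
        m += 4.0 * np.pi * r**2 * rho * dr    # mass continuity
        P -= G * m * rho / r**2 * dr          # hydrostatic equilibrium
        r += dr
    return r, m                               # stellar radius and total mass

R, M = integrate_star(rho_c=1e5)
print(f"R = {R:.3e} cm, M = {M:.3e} g")
```

Integration stops where the pressure falls to zero, which is exactly the surface boundary condition P(R) = 0 discussed above.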
Although nowadays stellar evolution models describe the main features of color–magnitude diagrams, important improvements have to be made in order to remove uncertainties which are linked to the limited knowledge of transport phenomena. The most difficult challenge remains the numerical treatment of turbulence. Some research teams are developing simplified modelling of turbulence in 3D calculations.
Rapid evolution
The above simplified model is not adequate without modification in situations when the composition changes are sufficiently rapid. The equation of hydrostatic equilibrium may need to be modified by adding a radial acceleration term if the radius of the star is changing very quickly, for example if the star is radially pulsating. Also, if the nuclear burning is not stable, or the star's core is rapidly collapsing, an entropy term must be added to the energy equation.
| Physical sciences | Stellar astronomy | null |
1039146 | https://en.wikipedia.org/wiki/Directional%20drilling | Directional drilling | Directional drilling (or slant drilling) is the practice of drilling non-vertical bores. It can be broken down into four main groups: oilfield directional drilling, utility installation directional drilling, directional boring (horizontal directional drilling - HDD), and surface in seam (SIS), which horizontally intersects a vertical bore target to extract coal bed methane.
History
Many prerequisites enabled this suite of technologies to become productive. Probably, the first requirement was the realization that oil wells, or water wells, do not necessarily need to be vertical. This realization was quite slow, and did not really capture the attention of the oil industry until the late 1920s, when there were several lawsuits alleging that wells drilled from a rig on one property had crossed the boundary and were penetrating a reservoir on an adjacent property. Initially, proxy evidence such as production changes in other wells was accepted, but such cases fueled the development of small-diameter tools capable of surveying wells during drilling. Horizontal directional drill rigs have since developed toward larger scale, miniaturization, mechanical automation, operation in hard strata, and monitored directional drilling at ever greater lengths and depths.
Measuring the inclination of a wellbore (its deviation from the vertical) is comparatively simple, requiring only a pendulum. Measuring the azimuth (the direction, relative to the geographic grid, in which the wellbore runs), however, was more difficult. In certain circumstances, magnetic fields could be used, but readings would be influenced by metalwork used inside wellbores, as well as the metalwork used in drilling equipment. The next advance was the modification of small gyroscopic compasses by the Sperry Corporation, which was making similar compasses for aeronautical navigation. Sperry did this under contract to Sun Oil (which was involved in a lawsuit as described above), and a spin-off company "Sperry Sun" was formed, which brand continues to this day, absorbed into Halliburton. Three components are measured at any given point in a wellbore in order to determine its position: the depth of the point along the course of the borehole (measured depth), the inclination at the point, and the magnetic azimuth at the point. These three components combined are referred to as a "survey". A series of consecutive surveys is needed to track the progress and location of a wellbore; a common way of turning them into coordinates is sketched below.
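The article does not specify how surveys are converted into positions, but the minimum curvature method is the usual industry choice; the following is a minimal Python sketch of a single station-to-station step (function and variable names are illustrative):

```python
import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """One step of the minimum curvature method between two survey
    stations. Inclination/azimuth in degrees, measured depth in any
    consistent length unit. Returns (north, east, tvd) offsets."""
    i1, a1, i2, a2 = map(math.radians, (inc1, azi1, inc2, azi2))
    dmd = md2 - md1
    # dogleg: the total angle between the two survey directions
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1.0 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    # ratio factor bends the straight-line average onto a circular arc
    rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)
    north = 0.5 * dmd * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    east = 0.5 * dmd * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    tvd = 0.5 * dmd * (math.cos(i1) + math.cos(i2)) * rf
    return north, east, tvd

# example: 30 m between stations while building angle toward the northeast
print(min_curvature_step(1000.0, 10.0, 45.0, 1030.0, 12.0, 47.0))
```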
Prior experience with rotary drilling had established several principles for the configuration of drilling equipment down hole ("bottom hole assembly" or "BHA") that would be prone to "drilling crooked hole" (i.e., initial accidental deviations from the vertical would be increased). Counter-experience had also given early directional drillers ("DD's") principles of BHA design and drilling practice that would help bring a crooked hole nearer the vertical.
In 1934, H. John Eastman and Roman W. Hines of Long Beach, California, became pioneers in directional drilling when they and George Failing of Enid, Oklahoma, saved the Conroe, Texas, oil field. Failing had recently patented a portable drilling truck. He had started his company in 1931 when he mated a drilling rig to a truck and a power take-off assembly. The innovation allowed rapid drilling of a series of slanted wells. This capacity to quickly drill multiple relief wells and relieve the enormous gas pressure was critical to extinguishing the Conroe fire. In a May, 1934, Popular Science Monthly article, it was stated that "Only a handful of men in the world have the strange power to make a bit, rotating a mile below ground at the end of a steel drill pipe, snake its way in a curve or around a dog-leg angle, to reach a desired objective." Eastman Whipstock, Inc., would become the world's largest directional company in 1973.
Combined, these survey tools and BHA designs made directional drilling possible, but it was perceived as arcane. The next major advance was in the 1970s, when downhole drilling motors (aka mud motors, driven by the hydraulic power of drilling mud circulated down the drill string) became common. These allowed the drill bit to continue rotating at the cutting face at the bottom of the hole, while most of the drill pipe was held stationary. A piece of bent pipe (a "bent sub") between the stationary drill pipe and the top of the motor allowed the direction of the wellbore to be changed without needing to pull all the drill pipe out and place another whipstock. Coupled with the development of measurement while drilling tools (using mud pulse telemetry, networked or wired pipe or electromagnetism (EM) telemetry, which allows tools down hole to send directional data back to the surface without disturbing drilling operations), directional drilling became easier.
Certain profiles cannot be easily drilled while the drill pipe is rotating. Drilling directionally with a downhole motor requires occasionally stopping rotation of the drill pipe and "sliding" the pipe through the channel as the motor cuts a curved path. "Sliding" can be difficult in some formations, and it is almost always slower and therefore more expensive than drilling while the pipe is rotating, so the ability to steer the bit while the drill pipe is rotating is desirable. Several companies have developed tools which allow directional control while rotating. These tools are referred to as rotary steerable systems (RSS). RSS technology has made access and directional control possible in previously inaccessible or uncontrollable formations.
Benefits
Wells are drilled directionally for several purposes:
Increasing the exposed section length through the reservoir by drilling through the reservoir at an angle.
Drilling into the reservoir where vertical access is difficult or not possible. For instance an oilfield under a town, under a lake, or underneath a difficult-to-drill formation.
Allowing more wellheads to be grouped together on one surface location can allow fewer rig moves, less surface area disturbance, and make it easier and cheaper to complete and produce the wells. For instance, on an oil platform or jacket offshore, 40 or more wells can be grouped together. The wells will fan out from the platform into the reservoir(s) below. This concept is being applied to land wells, allowing multiple subsurface locations to be reached from one pad, reducing costs.
Drilling along the underside of a reservoir-constraining fault allows multiple productive sands to be completed at the highest stratigraphic points.
Drilling a "relief well" to relieve the pressure of a well producing without restraint (a "blowout"). In this scenario, another well could be drilled starting at a safe distance away from the blowout, but intersecting the troubled wellbore. Then, heavy fluid (kill fluid) is pumped into the relief wellbore to suppress the high pressure in the original wellbore causing the blowout.
Most directional drillers are given a planned well path to follow that is predetermined by engineers and geologists before the drilling commences. When the directional driller starts the drilling process, periodic surveys are taken with a downhole instrument to provide survey data (inclination and azimuth) of the well bore. These surveys are typically taken at intervals between 10 and 150 meters (30–500 feet), with 30 meters (90 feet) common during active changes of angle or direction, and distances of 60–100 meters (200–300 feet) being typical while "drilling ahead" (not making active changes to angle and direction). During critical angle and direction changes, especially while using a downhole motor, a measurement while drilling (MWD) tool will be added to the drill string to provide continuously updated measurements that may be used for (near) real-time adjustments.
This data indicates if the well is following the planned path and whether the orientation of the drilling assembly is causing the well to deviate as planned. Corrections are regularly made by techniques as simple as adjusting rotation speed or the drill string weight (weight on bottom) and stiffness, as well as more complicated and time-consuming methods, such as introducing a downhole motor. Such pictures, or surveys, are plotted and maintained as an engineering and legal record describing the path of the well bore. The survey pictures taken while drilling are typically confirmed by a later survey in full of the borehole, typically using a "multi-shot camera" device.
The multi-shot camera advances the film at set time intervals. By dropping the camera instrument in a sealed tubular housing inside the drill string (down to just above the drill bit) and then withdrawing the drill string in stages, the well may be fully surveyed at regular depth intervals. Approximately every 30 meters (90 feet) is common, the typical length of two or three joints of drill pipe, known as a "stand", since most drilling rigs "stand back" the pipe withdrawn from the hole in such increments.
Drilling to targets far laterally from the surface location requires careful planning and design. The current record holders manage wells over away from the surface location at a true vertical depth (TVD) of only 1,600–2,600 m (5,200–8,500 ft).
This form of drilling can also reduce the environmental cost and scarring of the landscape. Previously, long lengths of landscape had to be removed from the surface. This is no longer required with directional drilling.
Disadvantages
Until the arrival of modern downhole motors and better tools to measure inclination and azimuth of the hole, directional drilling and horizontal drilling was much slower than vertical drilling due to the need to stop regularly and take time-consuming surveys, and due to slower progress in drilling itself (lower rate of penetration). These disadvantages have shrunk over time as downhole motors became more efficient and semi-continuous surveying became possible.
What remains is a difference in operating costs: for wells with an inclination of less than 40 degrees, tools to carry out adjustments or repair work can be lowered by gravity on cable into the hole. For higher inclinations, more expensive equipment has to be mobilized to push tools down the hole.
Another disadvantage of wells with a high inclination was that prevention of sand influx into the well was less reliable and needed higher effort. Again, this disadvantage has diminished such that, provided sand control is adequately planned, it is possible to carry it out reliably.
Stealing oil
In 1990, Iraq accused Kuwait of stealing Iraq's oil through slant drilling.
The United Nations redrew the border after the 1991 Gulf War, which ended the seven-month Iraqi occupation of Kuwait. As part of the reconstruction, 11 new oil wells were placed among the existing 600. Some farms and an old naval base that used to be on the Iraqi side became part of Kuwait.
In the mid-twentieth century, a slant-drilling scandal occurred in the huge East Texas Oil Field.
New technologies
Between 1985 and 1993, the Naval Civil Engineering Laboratory (NCEL) (now the Naval Facilities Engineering Service Center (NFESC)) of Port Hueneme, California developed controllable horizontal drilling technologies. These technologies are capable of reaching 3,000–4,500 m and may reach 7,500 m when used under favorable conditions.
Techniques
Wellbore Surveys
Specialized tools determine the wellbore's deviation from vertical (inclination) and its directional orientation (azimuth). This data is vital for trajectory adjustments. These surveys are taken at regular intervals (e.g., every 30-100 meters) to track the wellbore's progress in real time. In critical sections, measurement while drilling (MWD) tools provide continuous downhole measurements for immediate directional corrections as needed. MWD uses gyroscopes, magnetometers, and accelerometers to determine borehole inclination and azimuth while the drilling is being done.
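As a rough sketch of how such sensor readings become a survey (the tool-axis conventions below are assumptions, and real tools also correct for magnetic declination and drill-string interference):

```python
import math

def survey_from_sensors(gx, gy, gz, bx, by, bz):
    """Compute inclination and azimuth from tool-axis accelerometer
    (gx, gy, gz) and magnetometer (bx, by, bz) readings, with the
    z-axis assumed to point along the borehole. Returns degrees."""
    g = math.sqrt(gx * gx + gy * gy + gz * gz)
    # inclination: angle between the tool axis and the gravity vector
    inc = math.degrees(math.atan2(math.hypot(gx, gy), gz))
    # commonly cited long-collar azimuth formula (no declination correction)
    num = g * (gx * by - gy * bx)
    den = bz * (gx * gx + gy * gy) - gz * (gx * bx + gy * by)
    azi = math.degrees(math.atan2(num, den)) % 360.0
    return inc, azi
```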
Trajectory Control
Bottom Hole Assembly (BHA): The configuration of drilling equipment near the drill bit (BHA) profoundly influences drilling direction. BHAs can be tailored to promote straight drilling or induce deviations.
Downhole Motors: Specialized mud motors rotate only the drill bit, allowing controlled changes in direction while the majority of the drill string remains stationary.
Rotary Steerable Systems (RSS): Advanced RSS technology enables steering even while the entire drill string is rotating, ensuring greater efficiency and control.
| Technology | Disciplines | null |
1039295 | https://en.wikipedia.org/wiki/Lion%20tamarin | Lion tamarin | The four species of lion tamarins or maned marmosets make up the genus Leontopithecus. They are small New World monkeys named for the mane surrounding their face, similar to the mane of a lion.
Description
Living in the eastern rainforests of Brazil, like all other callitrichids they are arboreal. Lion tamarins weigh up to and are about long, with tails about long. They jump through trees using their fingers to hold on to branches; they use their claws to dig under the bark to search for insects to eat. They also eat some snakes, small lizards, and small fruits. All are endangered or critically endangered, in part because their habitat has been severely disrupted by human development and climate change.
Lion tamarins tend to live in family groups, with both parents sharing different tasks of rearing the yearly twins born to them. The mother nurses her young every two to three hours, and the father carries the babies on his back.
Diurnal tree-dwellers, they sleep in tree cavities at night. They also seek shelter during the hottest part of the day.
Species list
The different species of lion tamarins are easily discernible from each other based upon the coloration of their fur.
Conservation
Climate change has been affecting lion tamarins in that cocoa production has taken over parts of their habitat. Mass-produced cocoa farming has been found to thin out the surrounding canopy trees, which is where lion tamarins mostly reside throughout the day.
| Biology and health sciences | New World monkeys | Animals |
1039305 | https://en.wikipedia.org/wiki/Basilosaurus | Basilosaurus | Basilosaurus (meaning "king lizard") is a genus of large, predatory, prehistoric archaeocete whale from the late Eocene, approximately 41.3 to 33.9 million years ago (mya). First described in 1834, it was the first archaeocete and prehistoric whale known to science. Fossils attributed to the type species B. cetoides were discovered in the United States. They were originally thought to be of a giant reptile, hence the suffix "-saurus", Ancient Greek for "lizard". The animal was later found to be an early marine mammal, prompting attempts at renaming the creature, which failed as the rules of zoological nomenclature dictate using the original name given. Fossils were later found of the second species, B. isis, in 1904 in Egypt, Western Sahara, Morocco, Jordan, Tunisia, and Pakistan. Fossils have also been unearthed in the southeastern United States and Peru.
Basilosaurus is thought to have been common in the Tethys Ocean. With the type species B. cetoides measuring around long and weighing up to based on modern estimates, Basilosaurus was one of the largest animals of the Paleogene. It was the top predator of its environment, preying on sharks, large fish and other marine mammals, namely the dolphin-like Dorudon, which seems to have been their predominant food source. Based on the localities where its fossils are discovered, Basilosaurus would have preferred to live in the shallows, specifically in the middle to outer neritic zones of the inland sea.
Basilosaurus was at one point a wastebasket taxon before the genus slowly started getting reevaluated, with many species of different Eocene cetacean being assigned to the genus in the past. However, most are invalid or have been reclassified under a new or different genus, leaving only two confirmed species.
Basilosaurus may have been one of the first fully aquatic cetaceans, sometimes referred to as the Pelagiceti. Basilosaurus, unlike modern cetaceans, had various types of teeth–such as canines and molars–in its mouth (heterodonty), and it probably was able to chew its food, in contrast to modern cetaceans which swallow their food whole.
Taxonomic history
Etymology
The two species of Basilosaurus are B. cetoides, whose remains were discovered in the United States, and B. isis, which was discovered in Egypt. B. cetoides is the type species for the genus. The holotype of B. cetoides was found in Ouachita Parish, Louisiana. Vertebrae were sent to the American Philosophical Society by a Judge Henry Bry of Ouachita Parish, Louisiana and Judge John Creagh of Clarke County, Alabama. Both fossils ended up in the hands of the anatomist Richard Harlan, who requested more examples from Creagh. The first bones were unearthed when rain caused a hillside full of sea shells to slide. The bones were lying in a curved line "measuring upwards of four hundred feet [122 meters] in length, with intervals which were vacant." Many of these bones were used as andirons and destroyed; Bry saved the bones he could find, but was convinced more bones were still to be found on the location. Bry speculated that the bones must have belonged to a "sea monster" and supplied "a piece having the appearance of a tooth" to help determine which kind.
Harlan identified the tooth as a wedge-shaped shell and instead focused on "a vertebra of enormous dimensions" which he assumed belonged to the order "Enalio-Sauri of Conybeare", "found only in the sub-cretaceous series." He noted that some parts of the vertebra were similar to those of Plesiosaurus and that the skull was similar to that of Mosasaurus, but that they were completely different in proportions. Comparing his vertebra to those of large dinosaurs such as Megalosaurus and Iguanodon, Harlan concluded that his specimen was considerably larger—he estimated the animal to have been no less than long—and therefore suggested the name Basilosaurus, meaning "king lizard".
Harlan brought his assembled specimens (including fragments of jaw and teeth, humerus, and rib fragments) to the UK where he presented them to anatomist Richard Owen. Owen concluded that the molar teeth were two-rooted, a dental morphology unknown in fishes and reptiles, and more complex and varied than in any known reptile, and therefore that the specimen must be a mammal. Owen correctly associated the teeth with cetaceans, but he thought it was an herbivorous animal, similar to sirenians. Consequently, Owen proposed renaming the find Zeuglodon cetoides ("whale-like yoke teeth" in reference to the double-rooted teeth) and Harlan agreed.
Wadi El Hitan
Wādī al-Ḥītān () is an Egyptian sandstone formation where many early-whale skeletons were discovered. German botanist Georg August Schweinfurth discovered the first archaeocete whale in Egypt (Zeuglodon osiris, now Saghacetus osiris) in 1879. He visited the Qasr el Sagha Formation in 1884 and 1886 and missed the now famous Wadi El Hitan by a few kilometers. German paleontologist Wilhelm Barnim Dames described the material, including the type specimen of Z. osiris, a well-preserved dentary.
Hugh Beadnell, head of the Geological Survey of Egypt 1896–1906, named and described Zeuglodon isis based on a partial mandible and several vertebrae from Wadi El Hitan in Egypt. A skull and some vertebrae of a smaller archaeocete were later described and named Prozeuglodon atrox, known today as Dorudon atrox. Deciduous teeth were subsequently discovered in this skull, and it was then believed to be a juvenile [Pro]zeuglodon isis for decades before more complete fossils of mature Dorudon were discovered.
In the 1980s, Elwyn L. Simons and Philip D. Gingerich started to excavate at Qasr el-Sagha and Wadi El Hitan with the hope of finding material that could match archaeocete fossils from Pakistan. Since then, over 500 archaeocete skeletons have been found at these two locations, of which most are B. isis or D. atrox, several of the latter carrying bite marks assumed to be from the former. A 1990 paper described additional fossils including foot bones and speculated that the reduced hind limbs were used as copulatory guides. Whale fossils were notably so common in the region that a masonry company once found that a newly cut stone countertop contained a cross section of a 40-million-year-old basilosaurid fossil, a find that also caught Gingerich's attention.
In 2015, a complete skeleton, the first-ever such find for Basilosaurus, was uncovered in Wadi El Hitan, preserved with the remains of its prey, including a Dorudon and several species of fish. The whale's skeleton also shows signs of scavenging or predation by large sharks such as the otodontid Carcharocles sokolovi.
Wastebasket taxa
Many dubious species have been assigned to Basilosaurus in the past which have since been invalidated or were too incomplete to determine anything.
Nomina dubia
A nomen dubium is a scientific name that is of unknown or doubtful application. There are a few documented cases of this being applied to Basilosaurus in the past.
Zeuglodon wanklyni was a supposed species of Basilosaurus described in 1876, based on a skull found at Barton Cliff in the United Kingdom. This single specimen, however, quickly disappeared, and the name has since been declared a nomen nudum or referred to as Zygorhiza wanklyni.
Zeuglodon vredense or vredensis was named in the 19th century based on a single, isolated tooth without any kind of accompanying description, and was therefore declared a nomen nudum.
Zeuglodon puschi[i] was a species said to come from Poland. It was later noted that the species is based on an incomplete vertebra of indeterminable position and, therefore, that the species is invalid.
Zeuglodon brachyspondylus was described by Johannes Peter Müller based on some vertebrae from "Zeuglodon hydrarchus", better known as Dr. Albert Koch's "Hydrarchos". It was later synonymized with Pontogeneus priscus, which a 2005 study declared a nomen dubium.
Reassigned species
Basilosaurus drazindai was named by a 1997 study based on a single lumbar vertebra. Originally, the species was thought to have lived in Pakistan and the UK. It was later declared a nomen dubium by Uhen (2013), but Gingerich and Zouhri (2015) reassigned it to the genus Eocetus. This species was at one point in time concluded to be the earliest record of the genus Basilosaurus, before its reclassification.
Zeuglodon elliotsmithii, Z. sensitivius, and Z. zitteli were synonymized and grouped under the genus Saghacetus by a 1992 study.
Zeuglodon paulsoni from Ukraine (then the Russian Empire) was synonymized with Platyosphys but is now considered a nomen dubium. Gingerich and Zouhri (2015), however, maintain Platyosphys as valid.
Basilosaurus caucasicus, also known as Basilosaurus caucasicum or Zeuglodon caucasicum, was a species described from the Russian Empire; it takes its name from the Caucasus, where it was found in the 1890s. The fossil was reassigned to the toothed whale Microzeuglodon caucasicum.
Basilosaurus harwoodi was discovered in the Murray River near Wellington in South Australia. This species' classification was controversial; T. S. Hall (1911) placed Basilosaurus harwoodi (or Zeuglodon harwoodi) in the genus Metasqualodon.
In 1906, German naturalist Othenio Abel thought fossils from the Eocene of Alabama, previously described in 1900 as being a Basilosaurus hip bone by American zoologist Frederic Augustus Lucas, represented the shoulder of a large bird similar to Gastornis, and named it Alabamornis gigantea. Lucas later countered his conclusion in 1908 as he reassigned the fossil specimens to the original conclusion of a Basilosaurus hip bone.
Description
Basilosaurus is one of the largest animals known to exist between the K–Pg extinction event 66 million years ago (mya) and around 15 million years ago when modern cetaceans began to reach enormous sizes. B. cetoides measured long and B. isis measured long. A 1998 study estimated that B. cetoides weighed more than and B. isis weighed nearly , while a 2025 study estimated that a long B. cetoides weighed . Basilosaurus is distinguished from other genera of basilosaurids by its larger body size and its more elongated posterior thoracic, lumbar, and anterior caudal vertebrae. Basilosaurus does not have the vertically oriented metapophyses seen in its closest relative, the basilosaurid Basiloterus. Basilosaurus is considered to be the largest of the archaeocete whales.
Cranium
The dental formula for B. isis is . The upper and lower molars and second to fourth premolars are double-rooted and high-crowned.
The head of Basilosaurus did not have room for a melon like modern toothed whales, and the brain was smaller in comparison, as well. They are not believed to have had the echolocation capabilities nor the social dynamics of extant cetaceans.
A 2011 study concluded that the skull of Basilosaurus is asymmetrical like in modern toothed whales, and not, as previously assumed, symmetrical like in baleen whales and artiodactyls (which are closely related to cetaceans). In modern toothed whales, this asymmetry is associated with high-frequency sound production and echolocation, neither of which is thought to have been present in Basilosaurus. This probably evolved to detect sound underwater, with a fatty sound-receiving pad in the mandible.
In the skull, the inner and middle ear are enclosed by a dense tympanic bulla. The synapomorphic cetacean air sinus system is partially present in basilosaurids, including the pterygoid, peribullary, maxillary, and frontal sinuses. The periotic bone, which surrounds the inner ear, is partially isolated. The mandibular canal is large and laterally flanked by a thin bony wall, the pan bone or acoustic fenestra. These features enabled basilosaurs to hear directionally in water.
The ear of basilosaurids is more derived than those in earlier archaeocetes, such as remingtonocetids and protocetids, in the acoustic isolation provided by the air-filled sinuses inserted between the ear and the skull. The basilosaurid ear did, however, have a large external auditory meatus, strongly reduced in modern cetaceans; though this was probably functional, it may have been of little use under water.
Hind limbs
An individual of B. isis had hind limbs with fused tarsals and only three digits. The limited size of the limb and the absence of an articulation with the sacral vertebrae make a locomotory function unlikely. Analysis has shown that the reduced limbs could rapidly adduct between only two positions. Possible uses for the structure have been suggested, such as clasper-like body functions (compare the function of pelvic spurs, the last vestiges of limbs in certain modern snakes). These limbs would have been used to guide the animals' long bodies during mating.
Spine and movement
A complete Basilosaurus skeleton was found in 2015, and several attempts have been made to reconstruct the vertebral column from partial skeletons. An early reconstruction estimated a total of 58 vertebrae, based on two partial and nonoverlapping skeletons of B. cetoides from Alabama. More complete fossils uncovered in Egypt in the 1990s allowed a more accurate estimation: the vertebral column of B. isis has been reconstructed from three overlapping skeletons to a total of 70 vertebrae, with a vertebral formula interpreted as seven cervical, 18 thoracic, 20 lumbar and sacral, and 25 caudal vertebrae. The vertebral formula of B. cetoides can be assumed to be the same.
Basilosaurus has an anguilliform (eel-like) body shape because of the elongation of the centra of the thoracic through anterior caudal vertebrae. In life, these vertebrae were filled with marrow, and because of the enlarged size, this made them buoyant. Basilosaurus probably swam predominantly in two dimensions at the sea surface, in contrast to the smaller Dorudon, which was likely a diving, three-dimensional swimmer. The skeletal anatomy of the tail suggests that a small fluke was probably present, which would have aided only vertical motion.
Similarly sized thoracic, lumbar, sacral, and caudal vertebrae imply that it moved in an anguilliform fashion, but predominantly in the vertical plane. Paleontologist Philip D. Gingerich theorized that Basilosaurus may also have moved in a very odd, horizontal anguilliform fashion to some degree, something completely unknown in modern cetaceans. The vertebrae appear to have been hollow, and likely also fluid-filled. This would imply that Basilosaurus typically functioned in only two dimensions at the ocean surface, compared with the three-dimensional habits of most other cetaceans. Judging from the relatively weak axial musculature and the thick bones in the limbs, Basilosaurus is not believed to have been capable of sustained swimming or deep diving, or terrestrial locomotion.
Paleobiology
Feeding
The cheek teeth of Basilosaurus retain a complex morphology and functional occlusion. Heavy wear on the teeth reveals that food was first chewed then swallowed. Scientists were able to estimate the bite force of Basilosaurus isis by analyzing the scarred skull bones of another species of prehistoric whale, Dorudon, and concluded that it could exert a maximum bite force of at least and could possibly exceed , roughly equivalent to the range between modern alligators and crocodiles.
Analyses of the stomach contents of B. cetoides has shown that this species fed exclusively on fish and large sharks, while bite marks on the skulls of juvenile Dorudon have been matched with the dentition of B. isis, suggesting a dietary difference between the two species, similar to that found in different populations of modern killer whales. It was probably an active predator rather than a scavenger. The discovery of juvenile Dorudon at Wadi Al Hitan bearing distinctive bite marks on their skulls indicates that B. isis would have aimed for the skulls of its victims to kill its prey, and then subsequently torn its meals apart, based on the disarticulated remains of the Dorudon skeletons. The finding further cements theories that B. isis was an apex predator that may have hunted newborn and juvenile Dorudon at Wadi Al Hitan when mothers of the latter came to give birth. The stomach contents of an elderly male B. isis not only includes Dorudon but the fish Pycnodus mokattamensis.
Paleoecology
Basilosaurus would have been the top predator of its environment. It lived in the warm tropical environment of the Eocene in areas abundant with sea grasses, such as Thalassodendron, Thalassia (also known as turtle grass) and Halodule. It would have coexisted with the dolphin-like Dorudon, the whales Cynthiacetus and Basiloterus, the primitive sirenian Protosiren, the early elephant Moeritherium, the sea turtle Puppigerus and many sharks, such as Galeocerdo alabamensis, Physogaleus, Otodus, Squatina prima, Striatolamia, Carcharocles sokolovi and Isurus praecursor.
Extinction
The Basilosaurus fossil record seems to end at about 35–33.9 mya. Its extinction coincides with the Eocene–Oligocene extinction event of 33.9 mya, which also resulted in the extinction of almost all other archaeocetes. The event has been attributed to volcanic activity, meteor impacts, or a sudden change in climate (such as the environment getting cooler), the latter of which might have caused changes in the ocean by disrupting oceanic circulation, thus limiting the numbers of prey for predators like Basilosaurus to feed on. Basilosaurus went extinct leaving no descendants, along with the rest of the archaeocetes. After their extinction, the new currents and deep ocean upwelling created a new environment that favored the evolutionary diversification of modern cetaceans (Neocetes), such as early toothed and baleen whales, from more advanced archaeocetes that had evolved the traits associated with Neocetes.
Classification
Two subfamilies exist within Basilosauridae: Basilosaurinae, which includes Basilosaurus, and Dorudontinae, although these groups have been declared invalid in the past. Dorudon remains were once thought to represent juvenile Basilosaurus.
In popular culture
The species B. cetoides is the state fossil of Alabama and Mississippi. During the early 19th century, B. cetoides fossils were so common (and sufficiently large) that they were regularly used as furniture in the American South.
Basilosaurus is featured in the BBC's Walking with... series in Walking with Beasts and Sea Monsters.
In the novel Moby-Dick by Herman Melville, Ishmael cites the Basilosaurus during his studies as a possible whale fossil.
| Biology and health sciences | Cetaceans | Animals |
1039393 | https://en.wikipedia.org/wiki/Andrewsarchus | Andrewsarchus | Andrewsarchus (), meaning "Andrews' ruler", is an extinct genus of artiodactyl that lived during the Middle Eocene in what is now China. The genus was first described by Henry Fairfield Osborn in 1924 with the type species A. mongoliensis based on a largely complete cranium. A second species, A. crassum, was described in 1977 based on teeth. A mandible, formerly described as Paratriisodon, does probably belong to Andrewsarchus as well. The genus has been historically placed in the families Mesonychidae or Arctocyonidae, or was considered to be a close relative of whales. It is now regarded as the sole member of its own family, Andrewsarchidae, and may have been related to entelodonts. Fossils of Andrewsarchus have been recovered from the Middle Eocene Irdin Manha, Lushi, and Dongjun Formations of Inner Mongolia, each dated to the Irdinmanhan Asian land mammal age (Lutetian–Bartonian stages, 48–38 million years ago).
Andrewsarchus has historically been reputed as the largest terrestrial, carnivorous mammal given its skull length of , though its overall body size was probably overestimated due to inaccurate comparisons with mesonychids. Its incisors are arranged in a semicircle, similar to entelodonts, with the second rivalling the canine in size. The premolars are again similar to entelodonts in having a single cusp. The crowns of the molars are wrinkled, suggesting it was omnivorous or a scavenger. Unlike many modern scavengers, a reduced sagittal crest and flat mandibular fossa suggest that Andrewsarchus likely had a fairly weak bite force.
Taxonomy
Early history
The holotype of Andrewsarchus mongoliensis is a mostly complete cranium (specimen number AMNH-VP 20135). It was recovered from the lower Irdin Manha Formation of Inner Mongolia during a 1923 palaeontological expedition conducted by the American Museum of Natural History of New York. Its discoverer was a local assistant, Kan Chuen-pao, also known as "Buckshot". It was initially identified by Walter W. Granger as the skull of an Entelodon. A drawing of the skull was sent to the museum, where it was identified by William Diller Matthew as belonging to "the primitive Creodonta of the family Mesonychidae". The specimen itself arrived at the museum and was described by Osborn in 1924. Its generic name honours Roy Chapman Andrews, the leader of the expedition, with the Ancient Greek archos (ἀρχός, "ruler") added to his surname.
A second species of Andrewsarchus, A. crassum, was named by Ding Suyin and colleagues in 1977 on the basis of IVPP V5101, a pair of teeth (the second and third lower premolars) recovered from the Dongjun Formation of Guangxi.
In 1957, Zhou Mingzhen and colleagues recovered a mandible, a fragmentary maxilla, and several isolated teeth from the Lushi Formation of Henan, China, which correlates to the Irdin Manha Formation. The maxilla belonged to a skull that was crushed beyond recognition; it is likely from the same individual as the mandible. Zhou described it in 1959 as Paratriisodon henanensis, and assigned it to Arctocyonidae. He further classified it as part of the subfamily Triisodontinae (now the family Triisodontidae) based on close similarities of the molars and premolars to those of Triisodon. A second species, P. gigas, was named by Zhou and colleagues in 1973 for a molar also from the Lushi Formation. Three molars and an incisor from the Irdin Manha Formation were later referred to P. gigas. Comparisons between the two genera were drawn as far back as 1969, when Frederick Szalay suggested that they either evolved from the same arctocyonid ancestors or that they were an example of convergent evolution. Paratriisodon was first properly synonymised with Andrewsarchus by Leigh Van Valen in 1978, who did so without explanation. Regardless, their synonymy was upheld by Maureen O'Leary in 1998, based on similarities between the molars and premolars of the two genera and their comparable body sizes.
Classification
Andrewsarchus was initially regarded as a mesonychid, and Paratriisodon as an arctocyonid. In 1995, the former became the sole member of its own subfamily, Andrewsarchinae, within Mesonychia. The subfamily was elevated to family level by Philip D. Gingerich in 1998, who tentatively assigned Paratriisodon to it. In 1988, Donald Prothero and colleagues recovered Andrewsarchus as the sister taxon to whales. It has since been recovered as a more basal member of Cetancodontamorpha, most closely related to entelodonts, hippos, and whales. In 2023, Yu and colleagues conducted a phylogenetic analysis of ungulates, with a particular focus on entelodontid artiodactyls. Andrewsarchus was recovered as part of a clade consisting of itself, Achaenodon, Erlianhyus, Protentelodon, Wutuhyus, and Entelodontidae. It was found to be most closely related to Achaenodon and Erlianhyus, with which it formed a polytomy.
Description
When first describing Andrewsarchus, Osborn believed it to be the largest terrestrial, carnivorous mammal. Based on the length of the A. mongoliensis holotype skull, and using the proportions of Mesonyx, he estimated a total body length of and a body height of . However, considering cranial and dental similarities with entelodonts, Frederick Szalay and Stephen Jay Gould proposed that it had proportions less like mesonychids and more like them, and thus that Osborn's estimates were likely inaccurate.
Skull
The holotype skull of Andrewsarchus has a total length of , and is wide at the zygomatic arches. The snout is greatly elongated, measuring one-and-a-half times the length of the basicranium, and the portion of the snout in front of the canines resembles that of entelodonts. Unlike entelodonts, however, the postorbital bar is incomplete. The sagittal crest is reduced, and the mandibular fossa is relatively flat. Together, these attributes suggest a weak temporalis muscle and a fairly weak bite force. The hard palate is long and narrow. The mandibular fossa is also offset laterally and ventrally from the basicranium, similar to the condition seen in mesonychids. The mandible itself is long and shallow, characterised by a straight and relatively shallow horizontal ramus. The masseteric fossa, the depression on the mandible to which the masseter attaches, is shallow. Symphyseal contact between the two mandibles is limited.
Dentition
The holotype cranium of Andrewsarchus demonstrates the typical placental tooth formula, of three incisors, one canine, four premolars and three molars per side, though it is not clear whether the same applies to the mandible. The upper incisors are arranged in a semicircle in front of the canines, a trait that is shared with entelodonts. The second incisor is enlarged, and is almost the size of the canines. This is partly because, while the canines were originally described as being "of enormous size", they are relatively small in proportion to the rest of the dentition. The upper premolars are elongate and consist of a single cusp, resembling those of entelodonts. The fourth premolar retains the protocone, though in a vestigial form. Their roots are not confluent and lack a dentine platform, which are both likely to be adaptations to prolong the tooth's functional life after crown abrasion. The first molar is the smallest. The second is the widest, but has been heavily worn since fossilisation. The third has largely avoided that wear. The premolars and molars have wrinkled crowns, similar to the condition seen in suids and other omnivorous artiodactyls. The tooth structure of the mandible (IVPP V5101) is difficult to determine, as nearly all are worn or broken. All of the right mandible's teeth are preserved save for the first premolar, which is instead preserved on the left mandible. The lower canine and the first premolar both point forwards. The third molar is large, with talonids that have two cusps.
Diet
In his paper describing Andrewsarchus, Osborn suggested that it may have been omnivorous based on comparisons with entelodonts. This conclusion was supported by Szalay and Gould, who use the heavily wrinkled crowns of the molars and premolars as supporting evidence, as well as the close phylogenetic relationship between Andrewsarchus and entelodonts. R.M. Joeckel, in 1990, suggested that it was likely an "omnivore-scavenger", and that it was an ecological analogue to entelodonts. Lars Werdelin further suggested that it was a scavenger, or that it might have preyed on brontotheres.
Palaeoecology
For much of the Eocene, a hothouse climate with humid, tropical environments with consistently high precipitations prevailed. Modern mammalian orders including the Perissodactyla, Artiodactyla, and Primates (or the suborder Euprimates) appeared already by the Early Eocene, diversifying rapidly and developing dentitions specialized for folivory. The omnivorous forms mostly either switched to folivorous diets or went extinct by the Middle Eocene (Lutetian–Bartonian, 48–38 million years ago) along with the archaic "condylarths". By the Late Eocene (Priabonian, 38–34 million years ago), most of the ungulate form dentitions shifted from bunodont cusps to cutting ridges (i.e. lophs) for folivorous diets.
The Irdin Manha Formation, from which the holotype of Andrewsarchus was recovered, consists of Irdinmanhan strata dated to the Middle Eocene. Andrewsarchus mongoliensis comes from the IM-1 locality, dated to the lower Irdinmanhan, from which the hyaenodontine Propterodon, the mesonychid Harpagolestes, at least three unnamed mesonychids, the artiodactyl Erlianhyus, the perissodactyls Deperetella and Lophialetes, the omomyid Tarkops, the glirian Gomphos, the rodent Tamquammys, and various indeterminate glirians are also known. The Lushi Formation, from which the Paratriisodon henanensis specimen was recovered, was deposited at around the same time as the Irdin Manha Formation. The mesonychid Mesonyx, the pantodont Eudinoceras, the dichobunid Dichobune, the helohyid Gobiohyus, the brontotheres Rhinotitan and Microtitan, the perissodactyls Amynodon and Lophialetes, the ctenodactylid Tsinlingomys, and the lagomorph Lushilagus have been identified from the Lushi Formation. The Dongjun Formation, from which A. crassum originates, is similarly Middle Eocene. It preserves the nimravid Eusmilus, the anthracotheriid Probrachyodus, the pantodont Eudinoceras, the brontotheres Metatelmatherium and cf. Protitan, the deperetellids Deperetella and Teleolophus, the hyracodontid Forstercooperia, the rhinocerotids Ilianodon and Prohyracodon, and the amynodonts Amynodon, Gigantamynodon, and Paramnyodon.
| Biology and health sciences | Other artiodactyla | Animals |
1040920 | https://en.wikipedia.org/wiki/Sigma%20bond | Sigma bond | In chemistry, sigma bonds (σ bonds) or sigma overlap are the strongest type of covalent chemical bond. They are formed by head-on overlapping between atomic orbitals along the internuclear axis. Sigma bonding is most simply defined for diatomic molecules using the language and tools of symmetry groups. In this formal approach, a σ-bond is symmetrical with respect to rotation about the bond axis. By this definition, common forms of sigma bonds are s+s, pz+pz, s+pz and dz2+dz2 (where z is defined as the axis of the bond or the internuclear axis).
Quantum theory also indicates that molecular orbitals (MOs) of identical symmetry actually mix or hybridize. As a practical consequence of this mixing in diatomic molecules, the wavefunctions of the s+s and pz+pz molecular orbitals become blended. The extent of this mixing (or hybridization or blending) depends on the relative energies of the MOs of like symmetry.
For homodiatomics (homonuclear diatomic molecules), bonding σ orbitals have no nodal planes at which the wavefunction is zero, either between the bonded atoms or passing through the bonded atoms. The corresponding antibonding, or σ* orbital, is defined by the presence of one nodal plane between the two bonded atoms.
Sigma bonds are the strongest type of covalent bonds due to the direct overlap of orbitals, and the electrons in these bonds are sometimes referred to as sigma electrons.
The symbol σ is the Greek letter sigma. When viewed down the bond axis, a σ MO has a circular symmetry, hence resembling a similarly sounding "s" atomic orbital.
Typically, a single bond is a sigma bond while a multiple bond is composed of one sigma bond together with pi or other bonds. A double bond has one sigma plus one pi bond, and a triple bond has one sigma plus two pi bonds.
Polyatomic molecules
Sigma bonds are obtained by head-on overlapping of atomic orbitals. The concept of sigma bonding is extended to describe bonding interactions involving overlap of a single lobe of one orbital with a single lobe of another. For example, propane is described as consisting of ten sigma bonds, one each for the two C−C bonds and one each for the eight C−H bonds.
Multiple-bonded complexes
Transition metal complexes that feature multiple bonds, such as the dihydrogen complex, have sigma bonds between the multiple bonded atoms. These sigma bonds can be supplemented with other bonding interactions, such as π-back donation, as in the case of W(CO)3(PCy3)2(H2), and even δ-bonds, as in the case of chromium(II) acetate.
Organic molecules
Organic molecules are often cyclic compounds containing one or more rings, such as benzene, and are often made up of many sigma bonds along with pi bonds. According to the sigma bond rule, the number of sigma bonds in a molecule is equivalent to the number of atoms plus the number of rings minus one.
Nσ = Natoms + Nrings − 1
This rule is a special-case application of the Euler characteristic of the graph which represents the molecule.
A molecule with no rings can be represented as a tree with a number of bonds equal to the number of atoms minus one (as in dihydrogen, H2, with only one sigma bond, or ammonia, NH3, with 3 sigma bonds). There is no more than 1 sigma bond between any two atoms.
Molecules with rings have additional sigma bonds, such as benzene rings, which have 6 C−C sigma bonds within the ring for 6 carbon atoms. The anthracene molecule, C14H10, has three rings so that the rule gives the number of sigma bonds as 24 + 3 − 1 = 26. In this case there are 16 C−C sigma bonds and 10 C−H bonds.
This rule fails in the case of molecules which, when drawn flat on paper, have a different number of rings than the molecule actually has. For example, Buckminsterfullerene, C60, has 32 rings, 60 atoms, and 90 sigma bonds, one for each pair of bonded atoms; however, 60 + 32 − 1 = 91, not 90. This is because the sigma rule is a special case of the Euler characteristic, in which each ring is counted as a face, each sigma bond as an edge, and each atom as a vertex. Ordinarily, one extra face is assigned to the space not inside any ring, but when Buckminsterfullerene is drawn flat without any crossings, one of the rings makes up the outer pentagon, so the inside of that ring is the outside of the graph. The rule also fails for other shapes: toroidal fullerenes and nanotubes instead obey the rule that the number of sigma bonds is exactly the number of atoms plus the number of rings. A nanotube drawn flat, as if looking through one end, has a face in the middle corresponding to the far end of the nanotube, which is not a ring, as well as a face corresponding to the outside.
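A small sketch of the counting rule, with the worked examples from this section entered by hand:

```python
def sigma_bonds(n_atoms, n_rings):
    """Sigma bond rule: N_sigma = N_atoms + N_rings - 1."""
    return n_atoms + n_rings - 1

print(sigma_bonds(11, 0))   # propane C3H8 (3 C + 8 H atoms)  -> 10
print(sigma_bonds(12, 1))   # benzene C6H6                    -> 12
print(sigma_bonds(24, 3))   # anthracene C14H10               -> 26
print(sigma_bonds(60, 32))  # buckminsterfullerene C60        -> 91, but the
                            # true count is 90: the Euler check
                            # V - E + F = 60 - 90 + 32 = 2 shows why the
                            # naive rule over-counts by one face here
```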
| Physical sciences | Bond structure | Chemistry |
1041031 | https://en.wikipedia.org/wiki/Dracunculiasis | Dracunculiasis | Dracunculiasis, also called Guinea-worm disease, is a parasitic infection by the Guinea worm, Dracunculus medinensis. A person becomes infected by drinking water contaminated with Guinea-worm larvae that reside inside copepods (a type of small crustacean). Stomach acid digests the copepod and releases the Guinea worm, which penetrates the digestive tract and escapes into the body. Around a year later, the adult female migrates to an exit site – usually the lower leg – and induces an intensely painful blister on the skin. Eventually, the blister bursts, creating a painful wound from which the worm gradually emerges over several weeks. The wound remains painful throughout the worm's emergence, disabling the affected person for the three to ten weeks it takes the worm to emerge.
There is no medication to treat or prevent dracunculiasis. Instead, the mainstay of treatment is the careful wrapping of the emerging worm around a small stick or gauze to encourage and speed up its exit. Each day, a few more centimeters of the worm emerge, and the stick is turned to maintain gentle tension. Too much tension can break and kill the worm in the wound, causing severe pain and swelling. Dracunculiasis is a disease of extreme poverty, occurring in places with poor access to clean drinking water. Prevention efforts center on filtering drinking water to remove copepods, as well as public education campaigns to discourage people from soaking affected limbs in sources of drinking water, as this allows the worms to spread their larvae.
Accounts consistent with dracunculiasis appear in surviving documents from physicians of Greco-Roman antiquity. In the 19th and early 20th centuries, dracunculiasis was widespread across much of Africa and South Asia, affecting as many as 48 million people per year. The effort to eradicate dracunculiasis began in the 1980s following the successful eradication of smallpox. By 1995, every country with endemic dracunculiasis had established a national eradication program. In the ensuing years, dracunculiasis cases have dropped precipitously, with 14 cases reported worldwide in 2023. Since 1986, 14 previously endemic countries have eradicated dracunculiasis, leaving the disease endemic in six: Angola, Central African Republic, Chad, Ethiopia, Mali, and South Sudan. If the eradication program succeeds, dracunculiasis will become the second human disease eradicated, after smallpox. D. medinensis can also infect dogs, cats, and baboons, though non-human cases are also falling due to eradication efforts. Other Dracunculus species cause dracunculiasis in reptiles worldwide and in mammals in the Americas.
Cause
Dracunculiasis is caused by infection with the roundworm Dracunculus medinensis. D. medinensis larvae reside within small aquatic crustaceans called copepods. Humans typically get infected when they unintentionally ingest copepods while drinking water. In some cases, infected copepods are consumed by fish or frogs, which are then consumed by humans or other animals, passing along the D. medinensis larvae. During digestion the copepods die, releasing the D. medinensis larvae. The larvae exit the digestive tract by penetrating the stomach and intestine, taking refuge in the abdomen or retroperitoneal space (behind the organs near the back of the abdomen). Over the next two to three months the larvae develop into adult male and female worms. The male remains comparatively small, while the female grows much larger. Once the worms reach their adult size they mate, and the male dies. Over the ensuing months, the female migrates to connective tissue or along bones, and continues to develop.
About a year after the initial infection, the female migrates to the skin, forms an ulcer, and emerges. When the wound touches fresh water, the female spews a milky-white substance containing hundreds of thousands of larvae into the water. Over the next several days as the female emerges from the wound, she can continue to discharge larvae into the surrounding water. The larvae are eaten by copepods, and after two to three weeks of development, they are infectious to humans again.
Signs and symptoms
The first signs of dracunculiasis occur around a year after infection, as the full-grown female worm prepares to leave the infected person's body. As the worm migrates to its exit site – typically the lower leg – some people have allergic reactions, including hives, fever, dizziness, nausea, vomiting, and diarrhea. Upon reaching its destination, the worm forms a fluid-filled blister under the skin. Over 1–3 days, the blister grows larger, begins to cause severe burning pain, and eventually bursts, leaving a small open wound. The wound remains intensely painful as the worm slowly emerges over several weeks to months.
If an affected person submerges the wound in water, the worm spews a white substance releasing its larvae into the water. As the worm emerges, the open blister often becomes infected with bacteria, resulting in redness and swelling; the formation of abscesses; or, in severe cases, gangrene, sepsis, or tetanus. When the secondary infection is near a joint (typically the ankle), the damage to the joint can result in stiffness, arthritis, or contractures.
Infected people commonly harbor multiple worms – on average 1.8 worms per person, but as many as 40 – which will emerge from separate blisters at the same time. About 90% of worms emerge from the legs or feet, though worms can emerge from anywhere on the body.
Diagnosis
Dracunculiasis is diagnosed by visual examination – the thin white worm emerging from the blister is unique to this disease. Dead worms sometimes calcify and can be seen in the subcutaneous tissue by X-ray.
Treatment
There is no medicine to kill D. medinensis or prevent it from causing disease once within the body. Instead, treatment focuses on slowly and carefully removing the worm from the wound over days to weeks. Once the blister bursts and the worm begins to emerge, the wound is soaked in a bucket of water, allowing the worm to empty itself of larvae away from a source of drinking water. As the first part of the worm emerges, it is typically wrapped around a piece of gauze or a stick to maintain steady tension on the worm, encouraging its exit. Each day, several centimeters of the worm emerge from the blister, and the stick is wound to maintain tension. This is repeated daily until the full worm emerges, typically within a month. If too much pressure is applied, the worm can break and die, leading to severe swelling and pain at the site of the ulcer.
Treatment for dracunculiasis also includes regular wound care to avoid infection of the open ulcer. The US Centers for Disease Control and Prevention (CDC) recommends cleaning the wound before the worm emerges. Once the worm begins to exit the body, the CDC recommends daily wound care: cleaning the wound, applying antibiotic ointment, and replacing the bandage with fresh gauze. Painkillers like aspirin or ibuprofen can help ease the pain of the worm's exit.
Outcomes
Dracunculiasis is a debilitating disease, causing substantial disability in around half of those infected. People with worms emerging can be disabled for the three to ten weeks it takes the worms to fully emerge. When worms emerge near joints, inflammation or infection of the affected area can result in permanent stiffness, pain, or destruction of the joint. Some people with dracunculiasis have continuing pain for 12 to 18 months after the worm has emerged. Around 1% of dracunculiasis cases result in death from secondary infections of the wound.
When dracunculiasis was widespread, it often affected entire villages at once. Outbreaks occurring during planting and harvesting seasons severely impaired a community's agricultural operations – earning dracunculiasis the descriptor "empty granary disease" in some places. Communities affected by dracunculiasis also see reduced school attendance as children of affected parents must take over farm or household duties, and affected children may be physically prevented from walking to school for weeks.
Infection does not create immunity, so people can repeatedly experience dracunculiasis throughout their lives.
Prevention
There is no vaccine for dracunculiasis, and once infected with D. medinensis there is no way to prevent the disease from running its full course. Consequently, efforts to reduce the burden of dracunculiasis focus on preventing the transmission of D. medinensis from person to person. A mainstay of eradication efforts is the improvement of drinking water. Nylon filters, finely woven cloth, or specialized filter straws can all remove copepods from drinking water, eliminating transmission risk. Water sources can also be treated with temephos, which kills copepods, and contaminated water can be treated via boiling. Where possible, open sources of drinking water are replaced by deep wells that can serve as new sources of clean water. Public education campaigns inform people in affected areas how dracunculiasis spreads and encourage those with the disease to avoid soaking their wounds in bodies of water that are used for drinking.
Epidemiology
Dracunculiasis is now rare, with 14 cases reported worldwide in 2023 and 13 in 2022. This is down from 27 cases in 2020 and dramatically less than the estimated 3.5 million annual cases in 20 countries in 1986 – the year the World Health Assembly called for dracunculiasis's eradication. Dracunculiasis remains endemic in three countries: Chad, Mali, and South Sudan.
Dracunculiasis is a disease of extreme poverty, occurring in places where there is poor access to clean drinking water. Cases tend to be split roughly equally between males and females and can occur in all age groups. Within a given place, dracunculiasis risk is linked to occupation; people who farm or fetch drinking water are most likely to be infected.
When dracunculiasis was widespread, it had a seasonal cycle, though the timing varied by location. Along the Sahara desert's southern edge, cases peaked during the mid-year rainy season (May–October) when stagnant water sources were more abundant. Along the Gulf of Guinea, cases were more common during the dry season (October–March) when flowing water sources dried up.
History
Diseases consistent with the effects of dracunculiasis are referenced by writers throughout antiquity. Plutarch's Symposiacon refers to a (lost) description by the 2nd-century BCE writer Agatharchides concerning a "hitherto unheard-of disease" in which "small worms issue from [people's] arms and legs ... insinuating themselves between the muscles [to] give rise to horrible sufferings". Greco-Roman and Persian physicians, including Galen, Rhazes, and Avicenna, also wrote of diseases consistent with dracunculiasis; though there was some disagreement as to the nature of the disease, with some attributing it to a worm, while others considered it to be a corrupted part of the body emerging.
Some have proposed links between dracunculiasis and other prominent ancient texts and symbols. In a 1674 treatise on dracunculiasis, Georg Hieronymous Velschius ascribed serpentine figures in several ancient icons to Dracunculus, including Greek sculptures, signs of the zodiac, Arabic lettering, and the Rod of Asclepius, a common symbol of the medical profession. Similarly, parasitologist Friedrich Küchenmeister proposed in 1855 that the "fiery serpents" that plague the Hebrews in the Old Testament represented dracunculiasis. In 1959, parasitologist Reinhard Hoeppli proposed that a prescription in the Ebers papyrus – a medical text written around 1500 BCE – referred to the removal of a Guinea worm, an identification endorsed ten years later by the physician and Egyptologist Paul Ghalioungui; this would make the Ebers papyrus the oldest known description of the disease.
Carl Linnaeus included the Guinea worm in his 1758 edition of Systema Naturae, naming it Gordius medinensis. The name medinensis refers to the worm's longstanding association with the Arabian Peninsula city of Medina, with Avicenna writing in his The Canon of Medicine (published in 1025) "The disease is commonest at Medina, whence it takes its name". In Johann Friedrich Gmelin's 1788 update of Linnaeus' Systema Naturae, Gmelin renamed the worm Filaria medinensis, leaving Gordius for free-living worms. Henry Bastian authored the first detailed description of the worm itself, published in 1863. The following year, in his book Entozoa, Thomas Spencer Cobbold used the name Dracunculus medinensis, which was enshrined as the official name by the International Commission on Zoological Nomenclature in 1915. Despite longstanding knowledge that the worm was associated with water, the lifecycle of D. medinensis was the topic of protracted debate. Alexei Pavlovich Fedchenko filled a major gap with his 1870 publication describing that D. medinensis larvae can infect and develop inside copepods. The next step was shown by Robert Thomson Leiper, who described in a 1907 paper that monkeys fed D. medinensis–infected copepods developed mature Guinea worms, while monkeys directly fed D. medinensis larvae did not.
In the 19th and 20th centuries, dracunculiasis was widespread across nearly all of Africa and South Asia, though no exact case counts exist from the pre-eradication era. In a 1947 article in the Journal of Parasitology, Norman R. Stoll used rough estimates of populations in endemic areas to suggest that there could have been as many as 48 million cases of dracunculiasis per year. In 1976, the WHO estimated the global burden at 10 million cases per year. Ten years later, as the eradication effort was beginning, the WHO estimated 3.5 million cases per year worldwide.
Eradication
The campaign to eradicate dracunculiasis began at the urging of the CDC in 1980. Following smallpox eradication (last case in 1977; eradication certified in 1981), dracunculiasis was considered an achievable eradication target since it was preventable with only behavioral changes and less common than many similar diseases of poverty. In 1981, the steering committee for the United Nations International Drinking Water Supply and Sanitation Decade (a program to improve global drinking water from 1981 to 1990) adopted the goal of eradicating dracunculiasis as part of their efforts. The following June, an international meeting termed "Workshop on Opportunities for Control of Dracunculiasis" concluded that dracunculiasis could be eradicated through public education, drinking water improvement, and larvicide treatments. In response, India began its national eradication program in 1983.
In 1986, the 39th World Health Assembly issued a statement endorsing dracunculiasis eradication and calling on member states to craft eradication plans. The same year, the Carter Center began collaborating with the government of Pakistan to initiate its national program, which then launched in 1988. By 1996, national eradication programs had been launched in every country with endemic dracunculiasis: Ghana and Nigeria in 1989; Cameroon in 1991; Togo, Burkina Faso, Senegal, and Uganda in 1992; Benin, Mauritania, Niger, Mali, and Côte d'Ivoire in 1993; Sudan, Kenya, Chad, and Ethiopia in 1994; Yemen and the Central African Republic in 1995.
Each national eradication program had three phases. The first phase consisted of a nationwide search to identify the extent of dracunculiasis transmission and develop national and regional plans of action. The second phase involved the training and distribution of staff and volunteers to provide public education village-by-village, surveil for cases, and deliver water filters. This continued and evolved as needed until the national burden of disease was very low. In the third phase, programs intensified surveillance efforts to identify each case within 24 hours of the worm emerging and to prevent the affected person from contaminating drinking water supplies. Most national programs offered voluntary in-patient centers, where those affected could stay and receive food and care until their worms were removed.
In May 1991, the 44th World Health Assembly called for an international certification system to verify dracunculiasis eradication country-by-country. To this end, in 1995 the WHO established the International Commission for the Certification of Dracunculiasis Eradication (ICCDE). Once a country reports zero cases of dracunculiasis for a calendar year, the ICCDE considers that country to have interrupted Guinea worm transmission, and the country enters the "precertification phase". If the country reports zero cases in each of the next three calendar years, the ICCDE sends a team to the country to assess the country's disease surveillance systems and to verify its reports. The ICCDE can then formally recommend that the WHO Director-General certify the country as free of dracunculiasis.
Since the initiation of the global eradication program, the ICCDE has certified 15 of the original endemic countries as having eradicated dracunculiasis: Pakistan in 1997; India in 2000; Senegal and Yemen in 2004; the Central African Republic and Cameroon in 2007; Benin, Mauritania, and Uganda in 2009; Burkina Faso and Togo in 2011; Côte d'Ivoire, Niger, and Nigeria in 2013; and Ghana in 2015. In 2020, the 73rd World Health Assembly endorsed a new guidance plan, the Roadmap for Neglected Tropical Diseases 2021–2030, which sets a 2027 target for eradication of dracunculiasis, allowing certification by the end of 2030.
Other animals
In addition to humans, D. medinensis can infect domestic dogs and cats and wild olive baboons. Infections of domestic dogs have been particularly common in Chad, where they helped reignite dracunculiasis transmission in 2010. Animals are thought to become infected by eating a transport host, likely a fish or amphibian. As with humans, control efforts have focused on preventing infection by encouraging people in affected areas to bury fish entrails, as well as to identify and tie up dogs and cats with emerging worms so that they cannot access drinking water sources until after the worms have emerged. Animal infections are rapidly falling, with 2,000 recorded infections in 2019, 1,601 in 2020, and 863 in 2021. Domestic ferrets can be infected with D. medinensis in laboratory settings, and have been used as an animal disease model for human dracunculiasis.
Other Dracunculus species can infect snakes, turtles, and other reptiles. Animal infections are most widespread in snakes, with nine different species of Dracunculus described in snakes in the United States, Brazil, India, Vietnam, Australia, Papua New Guinea, Benin, Madagascar, and Italy. The only other reptiles affected are snapping turtles, with cases of infected common snapping turtles described in several US states and a single infected South American snapping turtle described in Costa Rica. Infections of mammals are limited to the Americas. Raccoons in the US and Canada are most widely impacted, particularly by D. insignis; however, Dracunculus worms have also been reported in American skunks, coyotes, foxes, opossums, domestic dogs, domestic cats, and (rarely) muskrats and beavers.
| Biology and health sciences | Helminthic diseases and infestations | Health |
1042259 | https://en.wikipedia.org/wiki/Landline | Landline | A landline is a telephone connection that uses metal wires or optical fibre from the owner's premises; it is also referred to as POTS (plain old telephone service), twisted pair, a telephone line, or the public switched telephone network (PSTN).
Landline services are traditionally provided via an analogue copper wire to a telephone exchange. Landline service is usually distinguished from more modern forms of telephone service that use Internet Protocol over optical fiber (fiber-to-the-x) or other broadband connections (VDSL/cable) with Voice over IP, although modern fixed phone services delivered over a fixed internet connection are sometimes also referred to as a landline (non-cellular service).
Characteristics
Landline service is typically provided through the outside plant of a telephone company's central office, or wire center. The outside plant comprises tiers of cabling between distribution points in the exchange area, so that a single pair of copper wire, or an optical fiber, reaches each subscriber location, such as a home or office, at the network interface. Customer premises wiring extends from the network interface ("NID") to the location of one or more telephones inside the premises.
A subscriber's telephone connected to a landline can be hard-wired or cordless. Fixed wireless, by contrast, refers to the operation of wireless devices or systems in fixed locations such as homes. Fixed wireless devices usually derive their electrical power from the utility mains electricity, unlike mobile wireless or portable wireless, which tend to be battery-powered. Although mobile and portable systems can be used in fixed locations, efficiency and bandwidth are compromised compared with fixed systems. Mobile or portable, battery-powered wireless systems can be used as emergency backups for fixed systems in case of a power blackout or natural disaster.
Another aspect of a landline is its ability to carry high-speed internet access, popularly known as digital subscriber line (DSL), which links back to a digital subscriber line access multiplexer (DSLAM) within the central office, as well as T-1/T-3 or ISDN services.
Usage and statistics
In 2003, the CIA World Factbook reported approximately 1.263 billion main telephone lines worldwide. China had more than any other country, at 350 million, and the United States was second with 268 million. The United Kingdom had 23.7 million residential fixed home phones.
A 2013 International Telecommunication Union report showed that the total number of fixed-telephone subscribers in the world was about 1.26 billion.
In many parts of the world, including Africa and India, the growth in mobile phone usage has outpaced that of landlines. In the United States, while 45.9 percent of households still had landlines as of 2017, more than half had only mobile phones. This trend is similar in Canada, where more than one in five households used mobile phones as their only source for telephone service in 2013. However, voice over IP (VoIP) services offer an alternative to traditional landlines, allowing numbers to remain in use without being tied to a physical location, making them more adaptable to modern ways of working. The FCC maintains both landline and Voice over IP subscriber numbers to monitor long term trends in usage.
Successors
2000s
In many countries, landline service has not been readily available to most people. In some countries in Africa, the rise in cell phones has outpaced growth in landline service. Between 1998 and 2008, Africa added only 2.4 million landlines. In contrast, between 2000 and 2008, cell phone use rose from fewer than 2 in 100 people to 33 out of 100. There has also been a substantial decline of landline phones in the Indian subcontinent, in urban and even more in rural areas.
In the early 21st century, installations of landline telephones have declined due to the advancement of mobile network technology and the obsolescence of copper wire networking. It is more difficult to install landline copper wires to every user than it is to install transmission towers for mobile service that many people can connect to. Some predict that these metallic networks will be deemed completely out of date and replaced by more efficient broadband and fiber-optic landline connections extending to rural areas and places where telecommunication was much more sparse. In 2009, The Economist wrote "At current rates the last landline in America will be disconnected sometime in 2025."
In 2004, only about 45% of people in the United States between the ages of 12 and 17 owned cell phones. At that time, most had to rely on landline telephones. Just 4 years later, that percentage climbed to about 71%. That same year, 2008, about 77% of adults owned a mobile phone.
2010s
In the year 2013, 91% of adults in the United States owned a mobile phone. Almost 60% of those with a mobile had a smartphone. A National Health Interview Survey of 19,956 households by the Centers for Disease Control and Prevention released May 4, 2017 showed 45.9 percent of U.S. households still had landlines, while 50.8 percent had only cell phones. Over 39 percent had both.
In Canada, more than one in five households uses cell phones as the only source of telephone service. In 2013, statistics showed that 21% of households claimed to use only cellular phones. Households owned by members under the age of 35 have a considerably higher percentage of exclusive cell phone use: in 2013, 60% of young household owners claimed to use only cell phones. In 2019, 54% of Canadian households and 86.4% of German households had a landline telephone.
2020s
In June 2020, it was reported that 60% of Australian adults used only mobile phones, with no landline. In 2021, only 14.5% of Australian and 29.4% of American households used landline at home. In contrast, 73% of UK households still had a landline connection in 2020 though this could be in part explained by broadband packaging practices. In 2022, 82.9% of German households had at least one landline phone while 73 percent of U.S. households had only a cell phone, 25 percent had a landline and cell service, and 1 percent had only a landline.
Estonia and the Netherlands have retired the legacy parts of the public switched telephone network (PSTN). In the United Kingdom, the analogue copper landline network is due to be terminated in 2025. The VoIP replacement is known as "Digital Voice" (on a BT service) in the UK. France, Germany and Japan are also in the process of replacing theirs.
By means of porting, voice over IP services can host landline numbers previously hosted on traditional fixed telephone networks. VoIP services can be used anywhere an internet connection is available, on many devices including smartphones, giving great flexibility as to where calls may be answered and thus facilitating remote, mobile and home working, for example. VoIP porting allows landline numbers to remain in use whilst freeing them from actual landlines tied to one location. This is useful where landline numbers are believed to be preferred by callers, or where it is preferable that legacy landline numbers remain connected.
| Technology | Telecommunications | null |
1042263 | https://en.wikipedia.org/wiki/Poynting%27s%20theorem | Poynting's theorem | In electrodynamics, Poynting's theorem is a statement of conservation of energy for electromagnetic fields developed by British physicist John Henry Poynting. It states that in a given volume, the stored energy changes at a rate given by the work done on the charges within the volume, minus the rate at which energy leaves the volume. It is strictly true only in media which are not dispersive, but it can be extended to the dispersive case.
The theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation.
Definition
Poynting's theorem states that the rate of energy transfer per unit volume from a region of space equals the rate of work done on the charge distribution in the region, plus the energy flux leaving that region.
Mathematically:
−∂u/∂t = ∇•S + J•E
where:
∂u/∂t is the rate of change of the energy density in the volume.
∇•S is the energy flow out of the volume, given by the divergence of the Poynting vector S.
J•E is the rate at which the fields do work on charges in the volume (J is the current density corresponding to the motion of charge, E is the electric field, and • is the dot product).
Integral form
Using the divergence theorem, Poynting's theorem can also be written in integral form:
−(d/dt) ∫_V u dV = ∮_∂V S•dA + ∫_V J•E dV
where
S is the energy flow, given by the Poynting vector.
u is the energy density in the volume.
∂V is the boundary of the volume. The shape of the volume is arbitrary but fixed for the calculation.
Continuity equation analog
In an electrical engineering context the theorem is sometimes written with the energy density term u expanded as shown. This form resembles the continuity equation:
∇•S + ε0 E•∂E/∂t + (B/μ0)•∂B/∂t + J•E = 0,
where
ε0 is the vacuum permittivity and μ0 is the vacuum permeability.
ε0 E•∂E/∂t is the density of reactive power driving the build-up of electric field,
(B/μ0)•∂B/∂t is the density of reactive power driving the build-up of magnetic field, and
J•E is the density of electric power dissipated by the Lorentz force acting on charge carriers.
Derivation
The rate of work done by the electromagnetic field on an infinitesimal charge dq is given by the Lorentz force law as:
dW/dt = F•v = dq (E + v×B)•v = dq E•v
(the magnetic term vanishes from the dot product because, by the definition of the cross product, v×B is perpendicular to v).
Here ρ is the volume charge density and J = ρv is the current density at the point and time in question, where v is the velocity of the charge dq. The rate of work done on all the charges in the volume V will be the volume integral:
dW/dt = ∫_V J•E dV
By Ampère's circuital law:
∇×H = J + ∂D/∂t, so that J = ∇×H − ∂D/∂t
(Note that the H and D forms of the magnetic and electric fields are used here. The B and E forms could also be used in an equivalent derivation.)
Substituting this into the expression for rate of work gives:
∫_V J•E dV = ∫_V [E•(∇×H) − E•∂D/∂t] dV
Using the vector identity ∇•(E×H) = H•(∇×E) − E•(∇×H):
∫_V J•E dV = ∫_V [H•(∇×E) − ∇•(E×H) − E•∂D/∂t] dV
By Faraday's law:
∇×E = −∂B/∂t
giving:
∫_V J•E dV = −∫_V [∇•(E×H) + H•∂B/∂t + E•∂D/∂t] dV
Continuing the derivation requires the following assumptions:
the charges are moving in a medium which is not dispersive.
the total electromagnetic energy density, even for time-varying fields, is given by u = ½(E•D + B•H)
It can be shown that:
∂/∂t [½(E•D)] = E•∂D/∂t
and
∂/∂t [½(B•H)] = H•∂B/∂t
and so:
∂u/∂t = E•∂D/∂t + H•∂B/∂t
Returning to the equation for rate of work,
dW/dt = ∫_V J•E dV = −∫_V [∂u/∂t + ∇•(E×H)] dV
Since the volume is arbitrary, this can be cast in differential form as:
∂u/∂t = −∇•S − J•E
where S = E×H is the Poynting vector.
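The vector identity invoked above is the crux of the derivation, and it can be checked symbolically. A minimal sketch in Python using SymPy, with arbitrary polynomial fields chosen purely for illustration:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# Arbitrary smooth vector fields standing in for E and H
E = y**2 * R.i + x*z * R.j + x*y * R.k
H = z * R.i + x**2 * R.j + y * R.k

lhs = divergence(E.cross(H))            # div(E x H)
rhs = H.dot(curl(E)) - E.dot(curl(H))   # H . curl(E) - E . curl(H)
print(sp.simplify(lhs - rhs))           # prints 0, confirming the identity
```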
Poynting vector in macroscopic media
In a macroscopic medium, electromagnetic effects are described by spatially averaged (macroscopic) fields. The Poynting vector in a macroscopic medium can be defined self-consistently with microscopic theory, in such a way that the spatially averaged microscopic Poynting vector is exactly predicted by a macroscopic formalism. This result is strictly valid in the low-loss limit and allows for the unambiguous identification of the Poynting vector form in macroscopic electrodynamics.
Alternative forms
It is possible to derive alternative versions of Poynting's theorem. Instead of the flux vector E×H as above, it is possible to follow the same style of derivation, but instead choose E×B, the Minkowski form D×B, or perhaps D×H. Each choice represents the response of the propagation medium in its own way: the E×H form above has the property that the response happens only due to electric currents, while the D×H form uses only (fictitious) magnetic monopole currents. The other two forms (Abraham and Minkowski) use complementary combinations of electric and magnetic currents to represent the polarization and magnetization responses of the medium.
Modification
The derivation of the statement is dependent on the assumption that the materials the equation models can be described by a set of susceptibility properties that are linear, isotropic, homogeneous and independent of frequency. The assumption that the materials have no absorption must also be made. A modification to Poynting's theorem to account for variations includes a term for the rate of non-Ohmic absorption in a material, which can be calculated by a simplified approximation based on the Drude model.
Complex Poynting vector theorem
This form of the theorem is useful in antenna theory, where one often has to consider harmonic fields propagating in space.
In this case, using phasor notation, E(t) = Re{E e^{jωt}} and H(t) = Re{H e^{jωt}}, where the E and H inside the braces are complex phasor amplitudes.
Then the following mathematical identity holds:
∮_∂V ½(E×H*)•dA = −∫_V ½(E•J*) dV − 2jω ∫_V (w_m − w_e) dV
where J is the current density, * denotes complex conjugation, and w_e = ¼ ε(E•E*) and w_m = ¼ μ(H•H*) are the cycle-averaged electric and magnetic energy densities.
Note that in free space, ε and μ are real; thus,
taking the real part of the above formula expresses the fact that the time-averaged radiated power flowing out through ∂V is equal and opposite to the time-averaged work done by the fields on the charges.
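The ½ Re{·} structure comes from time-averaging products of harmonic quantities: for any phasors A and B, the average of Re(A e^{jωt})·Re(B e^{jωt}) over one period is ½ Re(A B*). A minimal numerical check in Python, with arbitrarily chosen values:

```python
import numpy as np

omega = 2 * np.pi * 50.0             # arbitrary angular frequency
T = 2 * np.pi / omega                # one period
t = np.linspace(0.0, T, 200_000, endpoint=False)
A, B = 3 + 4j, 1 - 2j                # arbitrary phasors

a = np.real(A * np.exp(1j * omega * t))
b = np.real(B * np.exp(1j * omega * t))

print(np.mean(a * b))                  # time average over one period
print(0.5 * np.real(A * np.conj(B)))   # phasor formula; both give -2.5
```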
| Physical sciences | Electrodynamics | Physics |
4240292 | https://en.wikipedia.org/wiki/Porphyra | Porphyra | Porphyra is a genus of coldwater seaweeds that grow in cold, shallow seawater. More specifically, it is a genus of red algae that includes the laver species (from which laverbread is made), comprising approximately 70 species. It grows in the intertidal zone, typically between the upper intertidal zone and the splash zone in cold waters of temperate oceans. In East Asia, it is used to produce the sea vegetable products nori (in Japan) and gim (in Korea). There are considered to be 60–70 species of Porphyra worldwide and seven around Britain and Ireland, where it has been traditionally used to produce edible sea vegetables on the Irish Sea coast. The species Porphyra purpurea has one of the largest plastid genomes known, with 251 genes.
Life cycle
Porphyra displays a heteromorphic alternation of generations. The thallus we see is the haploid generation; it can reproduce asexually by forming spores which grow to replicate the original thallus. It can also reproduce sexually. Both male and female gametes are formed on the one thallus. The female gametes while still on the thallus are fertilized by the released male gametes, which are non-motile. The fertilized, now diploid, carposporangia after mitosis produce spores (carpospores) which settle, then bore into shells, germinate and form a filamentous stage. This stage was originally thought to be a different species of alga, and was referred to as Conchocelis rosea. That Conchocelis was the diploid stage of Porphyra was discovered in 1949 by the British phycologist Kathleen Mary Drew-Baker for the European species Porphyra umbilicalis. It was later shown for species from other regions as well.
Food
Most human cultures with access to Porphyra use it as a food or somehow in the diet, making it perhaps the most domesticated of the marine algae. It is known as laver, nori (Japanese), amanori (Japanese), zakai, gim (Korean), zǐcài (Chinese), karengo, sloke or slukos. The marine red alga Porphyra has been cultivated extensively in many Asian countries as an edible seaweed used to wrap the rice and fish that compose the Japanese food sushi and the Korean food gimbap. In Japan, the annual production of Porphyra species is valued at 100 billion yen (US$1 billion).
Porphyra is harvested from the coasts of Great Britain and Ireland, where it has a variety of culinary uses, including laverbread. In Hawaii, the species is considered a delicacy. Porphyra was also harvested by the Southern Kwakiutl, Haida, Seechelt, Squawmish, Nuu-chah-nulth, Nuxalk, Tsimshian, and Tlingit peoples of the North American Pacific coast.
Vitamin B12
Porphyra contains vitamin B12 and one study suggests that it is the most suitable non-meat source of this essential vitamin. In the view of the Academy of Nutrition and Dietetics, however, it may not provide an adequate source of B12 for vegans.
Species
Porphyra currently contains 57 confirmed species and 14 unconfirmed species.
Confirmed
Unconfirmed
Following a major reassessment of the genus in 2011, many species previously included in Porphyra have been transferred to Pyropia: for example Pyropia tenera, Pyropia yezoensis, and the species from New Zealand Pyropia rakiura and Pyropia virididentata, leaving only five species out of seventy still within Porphyra itself.
| Biology and health sciences | Bikonts | Plants |
4240766 | https://en.wikipedia.org/wiki/Upland%20and%20lowland | Upland and lowland | Upland and lowland are conditional descriptions of a plain based on elevation above sea level. In studies of the ecology of freshwater rivers, habitats are classified as upland or lowland.
Definitions
Upland and lowland are portions of a plain that are conditionally categorized by their elevation above sea level. Lowlands occupy the lower elevations, while uplands lie moderately higher. On unusual occasions, certain lowlands such as the Caspian Depression lie below sea level. Upland areas tend to rise into valleys and mountains, forming mountain ranges, while lowland areas tend to be uniformly flat, although both can vary, as with the Mongolian Plateau.
Upland habitats are the cold, clear, rocky, fast-flowing rivers of mountainous areas; lowland habitats are the warm, slow-flowing rivers of relatively flat lowland areas, with water that is frequently colored by sediment and organic matter.
These classifications overlap with the geological definitions of "upland" and "lowland". In geology an "upland" is generally considered to be land that is at a higher elevation than the alluvial plain or stream terrace, which are considered to be "lowlands". The term "bottomland" refers to low-lying alluvial land near a river.
Many freshwater fish and invertebrate communities around the world show a pattern of specialization into upland or lowland river habitats. Classifying rivers and streams as upland or lowland is important in freshwater ecology, as the two types of river habitat are very different, and usually support very different populations of fish and invertebrate species.
Uplands
In freshwater ecology, upland rivers and streams are the fast-flowing rivers and streams that drain elevated or mountainous country, often onto broad alluvial plains (where they become lowland rivers). However, elevation is not the sole determinant of whether a river is upland or lowland. Arguably the most important determinants are those of stream power and stream gradient. Rivers with a course that drops rapidly in elevation will have faster water flow and higher stream power or "force of water". This in turn produces the other characteristics of an upland river—an incised course, a river bed dominated by bedrock and coarse sediments, a riffle and pool structure and cooler water temperatures. Rivers with a course that drops in elevation very slowly will have slower water flow and lower force. This in turn produces the other characteristics of a lowland river—a meandering course lacking rapids, a river bed dominated by fine sediments and higher water temperatures. Lowland rivers tend to carry more suspended sediment and organic matter as well, but some lowland rivers have periods of high water clarity in seasonal low-flow periods.
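Because gradient and discharge jointly set stream power, the distinction can be sketched numerically. A minimal sketch in Python using the standard gross stream-power formula Ω = ρgQS; the classification threshold here is purely hypothetical, chosen only to illustrate the idea:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def stream_power(discharge_m3s: float, slope: float) -> float:
    """Gross stream power per unit channel length (W/m): rho * g * Q * S."""
    return RHO_WATER * G * discharge_m3s * slope

def classify_reach(discharge_m3s: float, slope: float,
                   threshold_w_per_m: float = 100.0) -> str:
    """Label a river reach; the 100 W/m cutoff is purely illustrative."""
    if stream_power(discharge_m3s, slope) > threshold_w_per_m:
        return "upland"
    return "lowland"

print(classify_reach(5.0, 0.02))     # steep mountain stream -> upland
print(classify_reach(50.0, 0.0001))  # flat alluvial river -> lowland
```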
The generally clear, cool, fast-flowing waters and bedrock and coarse sediment beds of upland rivers encourage fish species with limited temperature tolerances, high oxygen needs, strong swimming ability and specialised reproductive strategies to prevent eggs or larvae being swept away. These characteristics also encourage invertebrate species with limited temperature tolerances, high oxygen needs and ecologies revolving around coarse sediments and interstices or "gaps" between those coarse sediments.
The term "upland" is also used in wetland ecology, where "upland" plants indicate an area that is not a wetland.
Lowlands
The generally more turbid, warm, slow-flowing waters and fine sediment beds of lowland rivers encourage fish species with broad temperature tolerances and greater tolerances to low oxygen levels, and life history and breeding strategies adapted to these and other traits of lowland rivers. These characteristics also encourage invertebrate species with broad temperature tolerances and greater tolerances to low oxygen levels and ecologies revolving around fine sediments or alternative habitats such as submerged woody debris ("snags") or submergent macrophytes ("water weed").
Lowland alluvial plains
Lowland alluvial plains form when sediment carried over long periods of time by one or more rivers flowing out of highland regions is deposited in lowland regions. Examples include American Bottom, a flood plain of the Mississippi River in Southern Illinois; Bois Brule Bottom; and bottomland hardwood forest, a deciduous hardwood forest found in broad lowland floodplains of the United States.
| Physical sciences | Landforms: General | Earth science |
4242000 | https://en.wikipedia.org/wiki/Power%20system%20protection | Power system protection | Power system protection is a branch of electrical power engineering that deals with the protection of electrical power systems from faults through the disconnection of faulted parts from the rest of the electrical network. The objective of a protection scheme is to keep the power system stable by isolating only the components that are under fault, whilst leaving as much of the network as possible in operation. The devices that are used to protect the power systems from faults are called protection devices.
Components
Protection systems usually comprise five components:
Current and voltage transformers to step down the high voltages and currents of the electrical power system to convenient levels for the relays to deal with
Protective relays to sense the fault and initiate a trip, or disconnection, order
Circuit breakers or RCDs to open/close the system based on relay and autorecloser commands
Batteries to provide power in case of power disconnection in the system
Communication channels to allow analysis of current and voltage at remote terminals of a line and to allow remote tripping of equipment.
For parts of a distribution system, fuses are capable of both sensing and disconnecting faults.
Failures may occur in each part, such as insulation failure, fallen or broken transmission lines, incorrect operation of circuit breakers, short circuits and open circuits. Protection devices are installed with the aims of protection of assets and ensuring continued supply of energy.
Switchgear is a combination of electrical disconnect switches, fuses or circuit breakers used to control, protect and isolate electrical equipment. Switches are safe to open under normal load current (some switches are not safe to operate under normal or abnormal conditions), while protective devices are safe to open under fault current. Very important equipment may have completely redundant and independent protective systems, while a minor branch distribution line may have very simple low-cost protection.
Types of protection
High-voltage transmission network
Protection of the transmission and distribution system serves two functions: protection of the plant and protection of the public (including employees). At a basic level, protection disconnects equipment that experiences an overload or a short to earth. Some items in substations such as transformers might require additional protection based on temperature or gas pressure, among others.
Generator sets
In a power plant, the protective relays are intended to prevent damage to alternators or to the transformers in case of abnormal conditions of operation, due to internal failures, as well as insulating failures or regulation malfunctions. Such failures are unusual, so the protective relays have to operate very rarely. If a protective relay fails to detect a fault, the resulting damage to the alternator or to the transformer might require costly equipment repairs or replacement, as well as income loss from the inability to produce and sell energy.
Overload and back-up for distance (overcurrent)
Overload protection requires a current transformer which simply measures the current in a circuit and compares it to a predetermined value. There are two types of overload protection: instantaneous overcurrent (IOC) and time overcurrent (TOC). Instantaneous overcurrent protection operates as soon as the current exceeds a predetermined level. Time overcurrent protection operates based on a current-versus-time curve: if the measured current exceeds a given level for the preset amount of time, the circuit breaker or fuse operates. The behaviour of a time-overcurrent curve is illustrated in the sketch below.
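The sketch below implements the IEC 60255 "standard inverse" time-overcurrent characteristic, t = TMS × 0.14 / ((I/Is)^0.02 − 1), in Python; the pickup current and time-multiplier setting are arbitrary illustrative values, not recommended settings:

```python
def toc_trip_time(current_a: float, pickup_a: float, tms: float = 0.2) -> float:
    """Trip time (s) on the IEC 60255 standard-inverse curve,
    defined only for currents above the pickup setting."""
    ratio = current_a / pickup_a
    if ratio <= 1.0:
        return float("inf")  # below pickup: the relay never trips
    return tms * 0.14 / (ratio ** 0.02 - 1.0)

# Higher fault current -> faster trip, the essence of inverse-time grading
for fault in (400.0, 1000.0, 4000.0):
    print(f"{fault:6.0f} A -> trips in {toc_trip_time(fault, pickup_a=200.0):5.2f} s")
```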
Earth fault/ground fault
Earth fault protection also requires current transformers and senses an imbalance in a three-phase circuit. Normally the three phase currents are in balance, i.e. roughly equal in magnitude. If one or two phases become connected to earth via a low impedance path, their magnitudes will increase dramatically, as will current imbalance. If this imbalance exceeds a pre-determined value, a circuit breaker should operate. Restricted earth fault protection is a type of earth fault protection which looks for earth fault between two sets of current transformers (hence restricted to that zone).
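The imbalance the relay measures is simply the phasor sum of the three phase currents, often called the residual current. A minimal sketch in Python, with arbitrary example magnitudes:

```python
import cmath

def residual_current(ia: complex, ib: complex, ic: complex) -> float:
    """Magnitude of the phasor sum; near zero for a balanced circuit."""
    return abs(ia + ib + ic)

deg = cmath.pi / 180
# Healthy feeder: equal magnitudes, 120 degrees apart -> residual ~ 0
print(residual_current(100 * cmath.exp(0j),
                       100 * cmath.exp(-120j * deg),
                       100 * cmath.exp(120j * deg)))
# Phase-A earth fault: phase-A magnitude rises -> large residual (500 A here)
print(residual_current(600 * cmath.exp(0j),
                       100 * cmath.exp(-120j * deg),
                       100 * cmath.exp(120j * deg)))
```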
Distance (impedance relay)
Distance protection detects both voltage and current. A fault on a circuit will generally create a sag in the voltage level. If the ratio of voltage to current measured at the relay terminals, which equates to an impedance, falls within a predetermined level, the circuit breaker will operate. This is useful for reasonably long lines (longer than 10 miles) because the operating characteristics are based on the line characteristics. When a fault appears on the line, the impedance setting in the relay is compared to the apparent impedance of the line from the relay terminals to the fault. If the apparent impedance is below the relay setting, the fault is determined to be within the zone of protection. When the transmission line is too short (less than 10 miles), distance protection becomes more difficult to coordinate; in these instances the best choice of protection is current differential protection.
Back-up
The objective of protection is to remove only the affected portion of plant and nothing else. A circuit breaker or protection relay may fail to operate. In important systems, a failure of primary protection will usually result in the operation of back-up protection. Remote back-up protection will generally remove both the affected and unaffected items of plant to clear the fault. Local back-up protection will remove the affected items of the plant to clear the fault.
Low-voltage networks
The low-voltage network generally relies upon fuses or low-voltage circuit breakers to remove both overload and earth faults.
Cybersecurity
The bulk power system, the large interconnected electrical system that includes transmission and control systems, faces new cybersecurity threats every day ("Electric Grid Cybersecurity," 2019). Most of these attacks target the control systems in the grid. Because these control systems are connected to the internet, they are easier for hackers to attack. Such attacks can damage equipment and limit utility professionals' ability to control the system.
Coordination
Protective device coordination is the process of determining the "best fit" timing of current interruption when abnormal electrical conditions occur. The goal is to minimize an outage to the greatest extent possible. Historically, protective device coordination was done on translucent log–log paper. Modern methods normally include detailed computer based analysis and reporting.
Protection coordination is also handled through dividing the power system into protective zones. If a fault were to occur in a given zone, necessary actions will be executed to isolate that zone from the entire system. Zone definitions account for generators, buses, transformers, transmission and distribution lines, and motors. Additionally, zones possess the following features: zones overlap, overlap regions denote circuit breakers, and all circuit breakers in a given zone with a fault will open in order to isolate the fault. Overlapped regions are created by two sets of instrument transformers and relays for each circuit breaker. They are designed for redundancy to eliminate unprotected areas; however, overlapped regions are devised to remain as small as possible such that when a fault occurs in an overlap region and the two zones which encompass the fault are isolated, the sector of the power system which is lost from service is still small despite two zones being isolated.
Disturbance-monitoring equipment
Disturbance-monitoring equipment (DME) monitors and records system data pertaining to a fault. DME accomplish three main purposes:
model validation,
disturbance investigation, and
assessment of system protection performance.
DME devices include:
Sequence of event recorders, which record equipment response to the event
Fault recorders, which record actual waveform data of the system primary voltages and currents
Dynamic disturbance recorders (DDRs), which record incidents that portray power system behavior during dynamic events such as low frequency (0.1 Hz – 3 Hz) oscillations and abnormal frequency or voltage excursions
Performance measures
Protection engineers define dependability as the tendency of the protection system to operate correctly for in-zone faults. They define security as the tendency not to operate for out-of-zone faults. Both dependability and security are reliability issues. Fault tree analysis is one tool with which a protection engineer can compare the relative reliability of proposed protection schemes. Quantifying protection reliability is important for making the best decisions on improving a protection system, managing dependability versus security tradeoffs, and getting the best results for the least money. A quantitative understanding is essential in the competitive utility industry.
Reliability: Devices must function consistently when fault conditions occur, regardless of possibly being idle for months or years. Without this reliability, systems may cause costly damages.
Selectivity: Devices must avoid unwarranted, false trips.
Speed: Devices must function quickly to reduce equipment damage and fault duration, with only very precise intentional time delays.
Sensitivity: Devices must detect even the smallest value of faults and respond.
Economy: Devices must provide maximum protection at minimum cost.
Simplicity: Devices must minimize protection circuitry and equipment.
Reliability: Dependability vs Security
There are two aspects of reliable operation of protection systems: dependability and security. Dependability is the ability of the protection system to operate when called upon to remove a faulted element from the power system. Security is the ability of the protection system to restrain itself from operating during an external fault. Choosing the appropriate balance between security and dependability in designing the protection system requires engineering judgement and varies on a case-by-case basis.
| Technology | Electrical protective devices | null |
5651850 | https://en.wikipedia.org/wiki/Coryphodon | Coryphodon | Coryphodon (from Greek koryphē, "point", and odous, "tooth", meaning peaked tooth, referring to "the development of the angles of the ridges into points [on the molars].") is an extinct genus of pantodonts of the family Coryphodontidae.
Coryphodon was a pantodont, a member of the world's first group of large browsing mammals. It migrated across what is now northern North America, replacing Barylambda, an earlier pantodont. It is regarded as the ancestor of the genus Hypercoryphodon of Late Eocene Mongolia.
Coryphodon is known from many specimens in North America and considerably fewer in Europe, Mongolia, and China. It is a small to medium-sized coryphodontid who differs from other members of the family in dental characteristics.
Description
Coryphodon was one of the largest-known mammals of its time. The creature was very slow, with long upper limbs and short lower limbs, which were needed to support its weight. Coryphodon does not seem to have been in need of much in the way of defences, however, since most known predators of the time seem to have been much smaller than Coryphodon.
Coryphodon had one of the smallest brain-to-body ratios of any mammal, living or extinct.
Estimates of Coryphodon's body mass have varied considerably. A regression analysis based on ungulates estimated the mean body mass of the type species C. eocaenus to be smaller than that of C. radians, with C. proterus and C. lobatus possibly larger still.
Taxonomy and systematics
Since the first fossil was found in Wyoming, the taxonomy of Coryphodon and its family have been in disarray – five described genera have been synonymized with Coryphodon and thirty-five proposed species have been declared invalid.
Species
C. anax was named by Cope (1882); it was synonymized with Coryphodon lobatus by Osborn (1898) and Uhen and Gingerich (1995).
C. anthracoideus was named by de Blainville (1846).
C. armatus was named by Cope (1872).
C. dabuensis was named by Zhai (1978).
C. eocaenus was named by Owen (1846); it was reassigned to Lophiodon eocaenum by Blainville (1846); it was revalidated by Cope (1877), Lucas (1984) and Uhen and Gingerich (1995).
C. gosseleti (=C. grosseleti lapsus calami) was named by Malaquin (1899).
C. hamatus was named by Marsh (1876); it was synonymized with Coryphodon anthracoideus by Lucas (1984) and Lucas and Schoch (1990); it was synonymized with Coryphodon radians by Uhen and Gingerich (1995).
C. lobatus was named by Cope (1877).
C. marginatus was named by Cope (1882); it was synonymized with Coryphodon eocaenus by Lucas (1984) and Uhen and Gingerich (1995).
C. oweni was named by Hebert (1856).
C. pisuqti was named by Dawson (2012).
C. proterus was named by Simons (1960).
C. repandus was named by Cope (1882); it was synonymized with Coryphodon radians by Uhen and Gingerich (1995).
C. radians was named by Cope (1872).
C. singularis? was named by Osborn (1898); it is a nomen dubium due to its pathology.
C. subquadratus? was named by Cope (1882); it was synonymized with Manteodon.
C. tsaganensis was named by Reshetov (1976).
C. ventanus was named by Osborn (1898); it was synonymized with Coryphodon lobatus by Uhen and Gingerich (1995).
Synonyms
Bathmodon radians was named by Cope (1872); it was synonymized with Coryphodon anthracoideus by Lucas (1998b); it was reassigned to Coryphodon radians by Cope (1877), Simpson (1948a), Simpson (1951), Simpson (1981) and Uhen and Gingerich (1995).
Bathmodon semicinctus was named by Cope (1872); it was reassigned to Loxolophodon semicinctus by Cope (1872); it was revalidated by Cope (1873); it was reassigned to Coryphodon semicinctus by Wheeler (1961); it was synonymized with Coryphodon radians by Gazin (1962); it was considered a nomen dubium by Uhen and Gingerich (1995).
Ectacodon cinctus was named by Cope (1882); it was reassigned to Coryphodon cinctus by Osborn (1898); it was synonymized with Coryphodon radians by Uhen and Gingerich (1995).
Letalophodon?
Loxolophodon? was named by Cope (1872).
Manteodon subquadratus was named by Cope (1882); it was reassigned to Coryphodon subquadratus by Lucas (1984); it was synonymized with Coryphodon radians by Uhen and Gingerich (1995).
Metalophodon testis was named by Cope (1882); it was reassigned to Coryphodon testis by Osborn (1898); it was synonymized with Coryphodon radians by Uhen and Gingerich (1995).
Size evolution
Coryphodon evolved from the Late Paleocene C. proterus, one of the largest species found and the only one known from the Clarkforkian NALMA. The body size then decreased until C. eocaenus appeared at the Clarkforkian–Wasatchian transition (55.4 Ma, near the PETM), from where Coryphodon evolved into the large species C. radians. C. radians in its turn evolved into two contemporaneous species that appear in the Early Eocene, the small C. armatus and the very large C. lobatus. These changes in size are thought to be linked to global climate change, with the size minimum in the Coryphodon lineage occurring shortly after the Paleocene–Eocene boundary.
Paleobiology
Feeding and diet
Coryphodon had a semi-aquatic lifestyle, likely living in swamps and marshes like a hippopotamus, although it was not closely related to modern hippos or any other animal known today. Coryphodon had very strong neck muscles and short tusks that were probably used to uproot swamp plants. The other teeth in the mouth were suited for processing plants that had been grabbed by browsing.
Fossils found on Ellesmere Island, near Greenland, show that Coryphodon once lived there in warm swamp forests of huge trees, similar to the modern cypress swamps of the American South. Though the climate of the Eocene was much warmer than today, plants and animals living north of the Arctic Circle still experienced months of complete darkness and 24-hour summer days. Isotopic studies of tooth enamel revealed that during the summer period of extended daylight Coryphodon would eat soft vegetation such as flowering plants, aquatic plants and leaves. However during the extended periods of darkness when plant photosynthesis was impossible, Coryphodon would switch to a diet of leaf litter, twigs, evergreen needles and most revealingly fungi, an organism and food source that does not require light to grow. Not only does this study reveal the dietary range of Coryphodon, but it also reveals the behaviour of the northern populations living within the Arctic Circle. In this respect, Coryphodon did not migrate south or hibernate, it simply switched between two seasonal food sources.
Sexual dimorphism
A sexual dimorphism has been noted in Coryphodon: the canines tend to be either very large or very small relative to the cheek teeth, and, by comparison with modern hippos, there is reason to assume that males had larger canines than females.
| Biology and health sciences | Mammals: General | Animals |
5654027 | https://en.wikipedia.org/wiki/Synapse | Synapse | In the nervous system, a synapse is a structure that allows a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron or a target effector cell. Synapses can be classified as either chemical or electrical, depending on the mechanism of signal transmission between neurons. In the case of electrical synapses, neurons are coupled bidirectionally with each other through gap junctions and have a connected cytoplasmic milieu. These types of synapses are known to produce synchronous network activity in the brain, but can also result in complicated, chaotic network level dynamics. Therefore, signal directionality cannot always be defined across electrical synapses.
Synapses are essential for the transmission of neuronal impulses from one neuron to the next, playing a key role in enabling rapid and direct communication by creating circuits. In addition, a synapse serves as a junction where both the transmission and processing of information occur, making it a vital means of communication between neurons.
At the synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on the terminals of axons and the postsynaptic part is located on a dendrite or soma. Astrocytes also exchange information with the synaptic neurons, responding to synaptic activity and, in turn, regulating neurotransmission. Synapses (at least chemical synapses) are stabilized in position by synaptic adhesion molecules (SAMs) projecting from both the pre- and post-synaptic neuron and sticking together where they overlap; SAMs may also assist in the generation and functioning of synapses. Moreover, SAMs coordinate the formation of synapses, with various types working together to achieve the remarkable specificity of synapses. In essence, SAMs function in both excitatory and inhibitory synapses, likely serving as the mediator for signal transmission.
History
Santiago Ramón y Cajal proposed that neurons are not continuous throughout the body, yet still communicate with each other, an idea known as the neuron doctrine. The word "synapse" was introduced in 1897 by the English neurophysiologist Charles Sherrington in Michael Foster's Textbook of Physiology. Sherrington struggled to find a good term that emphasized a union between two separate elements, and the actual term "synapse" was suggested by the English classical scholar Arthur Woollgar Verrall, a friend of Foster. The word was derived from the Greek synapsis (), meaning "conjunction", which in turn derives from synaptein (), from syn () "together" and haptein () "to fasten".
The synaptic gap itself, however, remained a theoretical construct: although a discontinuity between contiguous axonal terminations and dendrites or cell bodies was sometimes reported, histological methods using the best light microscopes of the day could not visually resolve the separation, which is now known to be about 20 nm. It took the electron microscope, in the 1950s, to show the finer structure of the synapse, with its separate, parallel pre- and postsynaptic membranes and processes, and the cleft between the two.
Types
Chemical and electrical synapses are the two principal modes of synaptic transmission.
In a chemical synapse, electrical activity in the presynaptic neuron is converted (via the activation of voltage-gated calcium channels) into the release of a chemical called a neurotransmitter that binds to receptors located in the plasma membrane of the postsynaptic cell. The neurotransmitter may initiate an electrical response or a second-messenger pathway that may either excite or inhibit the postsynaptic neuron. Chemical synapses can be classified according to the neurotransmitter released: glutamatergic (often excitatory), GABAergic (often inhibitory), cholinergic (e.g. vertebrate neuromuscular junction), and adrenergic (releasing norepinephrine). Because of the complexity of receptor signal transduction, chemical synapses can have complex effects on the postsynaptic cell.
In an electrical synapse, the presynaptic and postsynaptic cell membranes are connected by special channels called gap junctions that are capable of passing an electric current, causing voltage changes in the presynaptic cell to induce voltage changes in the postsynaptic cell. Gap junctions allow electrical current to flow directly between cells, without the need for neurotransmitters, and also pass small molecules and ions such as calcium. Thus, the main advantage of an electrical synapse is the rapid transfer of signals from one cell to the next.
Mixed chemical electrical synapses are synaptic sites that feature both a gap junction and neurotransmitter release. This combination allows a signal to have both a fast component (electrical) and a slow component (chemical).
The formation of neural circuits in nervous systems appears to heavily depend on the crucial interactions between chemical and electrical synapses. Thus these interactions govern the generation of synaptic transmission. Synaptic communication is distinct from an ephaptic coupling, in which communication between neurons occurs via indirect electric fields. An autapse is a chemical or electrical synapse that forms when the axon of one neuron synapses onto dendrites of the same neuron.
Excitatory and inhibitory
Excitatory synapse: Enhances the probability of depolarization in postsynaptic neurons and the initiation of an action potential.
Inhibitory synapse: Diminishes the probability of depolarization in postsynaptic neurons and the initiation of an action potential.
Excitatory neurotransmitters open cation channels, driving an influx of Na+ that depolarizes the postsynaptic membrane toward the action potential threshold. In contrast, inhibitory neurotransmitters open either Cl- or K+ channels, making the postsynaptic membrane harder to depolarize and reducing firing. Depending on their release location, the receptors they bind to, and the ionic circumstances they encounter, various transmitters can be either excitatory or inhibitory. For instance, acetylcholine can either excite or inhibit depending on the type of receptors it binds to; glutamate, by contrast, typically serves as an excitatory neurotransmitter, while GABA acts as an inhibitory one. Additionally, dopamine exerts dual effects, displaying both excitatory and inhibitory impacts through binding to distinct receptors.
The membrane potential prevents Cl- from entering the cell, even when its concentration is much higher outside than inside. The reversal potential for Cl- in many neurons is quite negative, nearly equal to the resting potential. Opening Cl- channels therefore has little effect at rest, but once the membrane starts to depolarize, negatively charged Cl- ions flow into the cell and oppose the change. Consequently, it becomes more difficult to depolarize the membrane and excite the cell when Cl- channels are open. Similar effects result from the opening of K+ channels. The significance of inhibitory neurotransmitters is evident from the effects of toxins that impede their activity. For instance, strychnine binds to glycine receptors, blocking the action of glycine and leading to muscle spasms, convulsions, and death.
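This shunting behaviour follows directly from the reversal potentials involved. The sketch below is a minimal, illustrative conductance model in Python; the voltages and conductances are typical textbook values assumed for the example, not figures from this article. The steady-state membrane potential is the conductance-weighted average of the open channels' reversal potentials, so adding a Cl- conductance whose reversal potential sits near rest pulls the membrane back toward rest despite an excitatory input.

```python
# Minimal conductance model of a postsynaptic membrane (illustrative only).
# All values are typical textbook numbers assumed for this sketch.

def steady_state_potential(conductances, reversal_potentials):
    """Membrane potential where ionic currents balance: V = sum(g_i*E_i)/sum(g_i)."""
    total_g = sum(conductances)
    return sum(g * e for g, e in zip(conductances, reversal_potentials)) / total_g

E_LEAK, E_NA, E_CL = -70.0, +55.0, -68.0   # mV; E_Cl sits near the resting potential
G_LEAK = 1.0                               # arbitrary conductance units

# Excitatory input: opening Na+-permeable channels pulls V toward E_Na.
v_excited = steady_state_potential([G_LEAK, 0.5], [E_LEAK, E_NA])

# Adding an open Cl- conductance "buffers" V near E_Cl, opposing the depolarization.
v_with_inhibition = steady_state_potential([G_LEAK, 0.5, 2.0], [E_LEAK, E_NA, E_CL])

print(f"excitation alone:        {v_excited:+.1f} mV")   # about -28 mV
print(f"excitation + Cl- shunt:  {v_with_inhibition:+.1f} mV")  # about -51 mV
```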
Interfaces
Synapses can be classified by the type of cellular structures serving as the pre- and post-synaptic components. The vast majority of synapses in the mammalian nervous system are classical axo-dendritic synapses (axon synapsing upon a dendrite); however, a variety of other arrangements exist. These include but are not limited to axo-axonic, dendro-dendritic, axo-secretory, axo-ciliary, somato-dendritic, dendro-somatic, and somato-somatic synapses.
In fact, the axon can synapse onto a dendrite, onto a cell body, or onto another axon or axon terminal, as well as into the bloodstream or diffusely into the adjacent nervous tissue.
Conversion of chemical into electrical signals
Neurotransmitters are small signal molecules stored in membrane-enclosed synaptic vesicles and released via exocytosis; a change in electrical potential in the presynaptic cell triggers their release. The neurotransmitter then rapidly diffuses across the synaptic cleft and, by attaching to transmitter-gated ion channels, causes an electrical alteration in the postsynaptic cell. Once released, the neurotransmitter is swiftly eliminated, either by being absorbed by the nerve terminal that produced it, taken up by nearby glial cells, or broken down by specific enzymes in the synaptic cleft. Numerous Na+-dependent neurotransmitter carrier proteins recycle the neurotransmitters and enable the cells to maintain rapid rates of release.
At chemical synapses, transmitter-gated ion channels play a vital role in rapidly converting extracellular chemical signals into electrical signals. These channels are located in the postsynaptic cell's plasma membrane at the synapse region, and they temporarily open in response to neurotransmitter binding, causing a momentary alteration in the membrane's permeability. Transmitter-gated channels are comparatively less sensitive to the membrane potential than voltage-gated channels, which is why they cannot generate self-amplifying excitation on their own. Instead, they produce graded changes in membrane potential through local permeability changes, whose size depends on the amount and duration of neurotransmitter released at the synapse.
Recently, mechanical tension, a phenomenon not previously thought relevant to synapse function, has been found to be required for synapses on hippocampal neurons to fire.
Release of neurotransmitters
Neurotransmitters bind to ionotropic receptors on postsynaptic neurons, causing them to open or close. Variations in the quantity of neurotransmitter released from the presynaptic neuron may help regulate the effectiveness of synaptic transmission; indeed, the concentration of cytoplasmic calcium is involved in regulating the release of neurotransmitters from presynaptic neurons.
The chemical transmission involves several sequential processes:
Synthesizing neurotransmitters within the presynaptic neuron.
Loading the neurotransmitters into secretory vesicles.
Controlling the release of neurotransmitters into the synaptic cleft.
Binding of neurotransmitters to postsynaptic receptors.
Ceasing the activity of the released neurotransmitters.
Synaptic polarization
The function of neurons depends upon cell polarity. The distinctive structure of nerve cells allows action potentials to travel directionally (from dendrites to cell body down the axon), and for these signals to then be received and carried on by post-synaptic neurons or received by effector cells. Nerve cells have long been used as models for cellular polarization, and of particular interest are the mechanisms underlying the polarized localization of synaptic molecules. PIP2 signaling regulated by IMPase plays an integral role in synaptic polarity.
Phosphoinositides (PIP, PIP2, and PIP3) are molecules that have been shown to affect neuronal polarity. A gene (ttx-7) was identified in Caenorhabditis elegans that encodes myo-inositol monophosphatase (IMPase), an enzyme that produces inositol by dephosphorylating inositol phosphate. Organisms with mutant ttx-7 genes demonstrated behavioral and localization defects, which were rescued by expression of IMPase. This led to the conclusion that IMPase is required for the correct localization of synaptic protein components. The egl-8 gene encodes a homolog of phospholipase Cβ (PLCβ), an enzyme that cleaves PIP2. When ttx-7 mutants also had a mutant egl-8 gene, the defects caused by the faulty ttx-7 gene were largely reversed. These results suggest that PIP2 signaling establishes polarized localization of synaptic components in living neurons.
Presynaptic modulation
Modulation of neurotransmitter release by G-protein-coupled receptors (GPCRs) is a prominent presynaptic mechanism for regulation of synaptic transmission. The activation of GPCRs located at the presynaptic terminal, can decrease the probability of neurotransmitter release. This presynaptic depression involves activation of Gi/o-type G-proteins that mediate different inhibitory mechanisms, including inhibition of voltage-gated calcium channels, activation of potassium channels, and direct inhibition of the vesicle fusion process.
Endocannabinoids and their cognate receptors, including the GPCR CB1 receptor located at the presynaptic terminal, take part in this modulation through a retrograde signaling process: the endocannabinoids are synthesized in and released from postsynaptic neuronal elements and travel back to the presynaptic terminal, where they act on the CB1 receptor to produce short-term or long-term synaptic depression, a short- or long-lasting decrease in neurotransmitter release.
Effects of drugs on ligand-gated ion channels
Transmitter-gated ion channels have long been considered crucial targets for drugs. The majority of medications used to treat schizophrenia, anxiety, depression, and sleeplessness act at chemical synapses, and many of these pharmaceuticals function by binding to transmitter-gated channels. For instance, drugs such as barbiturates and tranquilizers bind to GABA receptors and enhance the inhibitory effect of the neurotransmitter GABA, so that a lower concentration of GABA suffices to open Cl- channels.
Furthermore, psychoactive drugs can target many other components of the synaptic signalling machinery. Numerous neurotransmitters are removed from the synaptic cleft by Na+-driven carrier proteins; by inhibiting such carriers, synaptic transmission is strengthened, as the action of the transmitter is prolonged. For example, Prozac is an antidepressant that works by blocking the reuptake of the neurotransmitter serotonin, and other antidepressants act by inhibiting the reuptake of both serotonin and norepinephrine.
Biogenesis
In nerve terminals, synaptic vesicles are produced quickly to compensate for their rapid depletion during neurotransmitter release. Their biogenesis involves segregating synaptic vesicle membrane proteins from other cellular proteins and packaging those distinct proteins into vesicles of appropriate size. Besides, it entails the endocytosis of synaptic vesicle membrane proteins from the plasma membrane.
Synaptoblastic and synaptoclastic refer to synapse-producing and synapse-removing activities within the biochemical signalling chain. This terminology is associated with the Bredesen Protocol for treating Alzheimer's disease, which conceptualizes Alzheimer's as an imbalance between these processes. As of October 2023, studies concerning this protocol remain small and few results have been obtained within a standardized control framework.
Role in memory
Potentiation and depression
It is widely accepted that the synapse plays a key role in the formation of memory. The stability of long-term memory can persist for many years; nevertheless, synapses, the neurological basis of memory, are highly dynamic. The formation of synaptic connections depends significantly on activity-dependent synaptic plasticity observed in various synaptic pathways. Indeed, the connection between memory formation and alterations in synaptic efficacy allows interactions between neurons to be reinforced. As neurotransmitters activate receptors across the synaptic cleft, the connection between two neurons is strengthened when both are active at the same time, as a result of the receptor's signaling mechanisms. The strengthening of connected neural pathways is thought to underlie the storage of information as memory. This process of synaptic strengthening is known as long-term potentiation (LTP).
By altering the release of neurotransmitters, the plasticity of synapses can be controlled in the presynaptic cell. The postsynaptic cell can be regulated by altering the function and number of its receptors. Changes in postsynaptic signaling are most commonly associated with N-methyl-d-aspartic acid receptor (NMDAR)-dependent LTP and long-term depression (LTD), driven by the influx of calcium into the postsynaptic cell; these are the most analyzed forms of plasticity at excitatory synapses.
Mechanism of protein kinase
Ca2+/calmodulin (CaM)-dependent protein kinase II (CaMKII) is best recognized for its roles in the brain, particularly in the neocortex and hippocampus, because it serves as a ubiquitous mediator of cellular Ca2+ signals. CaMKII is abundant in the nervous system, concentrated mainly at the synapses of nerve cells, and has been identified as a key regulator of cognitive processes, such as learning, and of neural plasticity. The first concrete experimental evidence for the long-assumed function of CaMKII in memory storage has only recently been demonstrated.
While Ca2+/CaM binding stimulates CaMKII activity, Ca2+-independent autonomous CaMKII activity can also be produced by a number of other processes. CaMKII becomes active by autophosphorylating itself upon Ca2+/calmodulin binding, and it remains active, continuing to phosphorylate itself, even after Ca2+/calmodulin has dissociated; as a result, the brain can use this mechanism to store long-term memories. Nevertheless, when CaMKII is dephosphorylated by a phosphatase enzyme, it becomes inactive, and memories can be lost. Hence, CaMKII plays a vital role in both the induction and maintenance of LTP.
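The switch-like behaviour described above can be illustrated with a toy rate model: autophosphorylation adds a self-reinforcing activation term, while phosphatase activity removes it. The Python sketch below only illustrates this bistability argument; the equations and every parameter value are invented for the example and do not reproduce any published CaMKII model.

```python
# Toy bistable-switch model of CaMKII autophosphorylation (illustrative only).
# dA/dt = Ca-driven activation + autophosphorylation - phosphatase removal.
# All parameters are invented for this sketch; this is not a published model.

def dA_dt(A, ca, k_ca=1.0, k_auto=8.0, k_pp=1.0, Km=0.1):
    activation = k_ca * ca * (1.0 - A)        # Ca2+/CaM-stimulated phosphorylation
    auto = k_auto * A * A * (1.0 - A)         # cooperative self-phosphorylation
    dephos = k_pp * A / (Km + A)              # saturable phosphatase activity
    return activation + auto - dephos

def simulate(A0, ca_pulse, steps=4000, dt=0.01):
    A = A0
    for step in range(steps):
        ca = ca_pulse if step < 500 else 0.0  # transient Ca2+ pulse, then baseline
        A = min(1.0, max(0.0, A + dt * dA_dt(A, ca)))
    return A

# A brief Ca2+ pulse flips the kinase into a self-sustaining "on" state, while
# without the pulse it relaxes back to "off" -- a memory-like switch.
print("no pulse:   A =", round(simulate(0.05, 0.0), 3))  # stays near 0
print("with pulse: A =", round(simulate(0.05, 2.0), 3))  # settles near 0.8
```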
Experimental models
For technical reasons, synaptic structure and function have historically been studied in unusually large model synapses, for example:
Squid giant synapse
Neuromuscular junction (NMJ), a cholinergic synapse in vertebrates, glutamatergic in insects
Ciliary calyx in the ciliary ganglion of chicks
Calyx of Held in the brainstem
Ribbon synapse in the retina
Schaffer collateral synapses in the hippocampus. These synapses are small, but their pre- and postsynaptic neurons are well separated (CA3 and CA1, respectively).
Synapses and diseases
Synapses function as ensembles within particular brain networks to control the amount of neuronal activity, which is essential for memory, learning, and behavior. Consequently, synaptic disruptions can have serious negative effects. Alterations in cell-intrinsic molecular systems or modifications to environmental biochemical processes can lead to synaptic dysfunction. The synapse is the primary unit of information transfer in the nervous system, and correct formation of synaptic contacts during development is essential for normal brain function. Several mutations have been connected to neurodevelopmental disorders, and compromised function at different synapse locations is a hallmark of neurodegenerative diseases.
Synaptic defects are causally associated with early appearing neurological diseases, including autism spectrum disorders (ASD), schizophrenia (SCZ), and bipolar disorder (BP). On the other hand, in late-onset degenerative pathologies, such as Alzheimer's (AD), Parkinson's (PD), and Huntington's (HD) diseases, synaptopathy is thought to be the inevitable end-result of an ongoing pathophysiological cascade. These diseases are identified by a gradual loss in cognitive and behavioral function and a steady loss of brain tissue. Moreover, these deteriorations have been mostly linked to the gradual build-up of protein aggregates in neurons, the composition of which may vary based on the pathology; all have the same deleterious effects on neuronal integrity. Furthermore, the high number of mutations linked to synaptic structure and function, as well as dendritic spine alterations in post-mortem tissue, has led to the association between synaptic defects and neurodevelopmental disorders, such as ASD and SCZ, characterized by abnormal behavioral or cognitive phenotypes.
Nevertheless, due to limited access to human tissue at late stages and a lack of thorough assessment of the essential components of human diseases in the available experimental animal models, it has been difficult to fully grasp the origin and role of synaptic dysfunction in neurological disorders.
| Biology and health sciences | Nervous system | Biology |
5655064 | https://en.wikipedia.org/wiki/Owner%27s%20manual | Owner's manual | An owner's manual (also called an instruction manual or a user guide) is an instructional book or booklet that is supplied with almost all technologically advanced consumer products such as vehicles, home appliances and computer peripherals.
Information contained in the owner's manual typically includes:
Safety instructions; for liability reasons these can be extensive, often including warnings against performing operations that are ill-advised for product longevity or overall user safety reasons.
Assembly instructions; for products that arrive in pieces for easier shipping.
Installation instructions; for products that need to be installed in a home or workplace.
Setup instructions; for devices that keep track of time or which maintain user-accessible state.
Instructions for normal or intended operations.
Programming instructions; for microprocessor controlled products such as VCRs, programmable calculators, and synthesizers.
Maintenance instructions.
Troubleshooting instructions; for when the product does not work as expected.
Service locations; for when the product requires repair by a factory authorized technician.
Regulatory code compliance information; for example with respect to safety or electromagnetic interference.
Product technical specifications.
Warranty information; sometimes provided as a separate sheet.
Until the last decade or two of the twentieth century it was common for an owner's manual to include detailed repair information, such as a circuit diagram; however, as products became more complex, this information was gradually relegated to specialized service manuals or dispensed with entirely, as devices became too inexpensive to be economically repaired.
Owner's manuals for simpler devices are often multilingual so that the same boxed product can be sold in many different markets. Sometimes the same manual is shipped with a range of related products so the manual will contain a number of sections that apply only to some particular model in the product range.
With the increasing complexity of modern devices, many owner's manuals have become so large that a separate quickstart guide is provided. Some owner's manuals for computer equipment are supplied on CD-ROM to cut down on manufacturing costs, since the owner is assumed to have a computer able to read the CD-ROM. Another trend is to supply instructional video material with the product, such as a videotape or DVD, along with the owner's manual.
Many businesses offer PDF copies of manuals that can be accessed or downloaded free of charge from their websites.
Installation manual
An installation manual or installation guide is a technical communication document intended to instruct people how to install a particular product. An installation manual is usually written by a technical writer or other technical staff.
Installation is the act of putting something in place so that it is ready for use. An installation manual most commonly describes the safe and correct installation of a product. The term product here relates to any consumer, non-consumer, hardware, software, electrical, electronic or mechanical product that requires installation. The installation of a computer program is also known as the setup.
In the case of an installation manual, the installation instructions form a separate document that focuses solely on the person(s) who will perform the installation. However, the installation instructions can also be an integrated part of the overall owner's manual.
The size, structure and content of an installation manual depend heavily on the nature of the product and the needs and capabilities of the intended target group. Furthermore, various standards and directives are available that provide guidance and requirements for the design of instructions.
The international standard IEC 82079 prescribes the required installation topics for an installation instruction. Among these topics are procedures, diagrams and conditions for installation activities, such as unpacking, mounting and connecting.
For machines the European Machinery Directive prescribes that an instruction manual must contain assembly, installation and connecting instructions, including drawings, diagrams and the means of attachment and the designation of the chassis or installation on which the machinery is to be mounted.
Car owner's manuals
All new cars come with an owner's manual from the manufacturer. Most owners leave them in the glove compartment for easy reference. Their frequent absence from rental cars can therefore be frustrating: it violates the driver's expectations and makes it difficult to use controls that are not understood, even though understanding the operation of an unfamiliar car's controls is one of the first steps recommended in defensive driving. Owner's manuals usually cover three main areas: a description of the location and operation of all controls; a schedule and descriptions of maintenance required, both by the owner and by a mechanic; and specifications such as oil and fuel capacity and part numbers of light bulbs used. Current car owner's manuals have become much bigger, partly because of the many safety warnings most likely designed to avoid product liability lawsuits, and partly because of ever more complicated audio and navigational systems, which often have their own manual.
If owners lose their car manual, they can either order a replacement from a dealer, pick up a used one secondhand, or download a PDF version of the manual online.
In 2017, IBM released an IBM Watson artificial intelligence able to understand and answer drivers' questions in natural language. "Ask Mercedes" was the first in a wave of these vehicle assistants, which can support both speech and text-based input.
Popular culture
The noun phrase owner's manual has been used by analogy in the title of numerous instructional books about entities that are not manufactured products, such as pets, body parts and businesses.
User guides
The equivalent document for computer software is called a user guide since users are typically licensees rather than owners of the software.
Unicode
The OPEN BOOK (📖) Unicode symbol is used to mean "read the operator's manual". OPEN BOOK has the Unicode code point U+1F4D6.
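As a quick illustration (assuming a Python environment), the code point can be checked directly from the standard library:

```python
# Look up the OPEN BOOK symbol (U+1F4D6) from its code point.
import unicodedata

symbol = "\U0001F4D6"
print(symbol, hex(ord(symbol)), unicodedata.name(symbol))  # 📖 0x1f4d6 OPEN BOOK
```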
| Technology | Machinery and tools: General | null |
5655988 | https://en.wikipedia.org/wiki/Anancus | Anancus | Anancus is an extinct genus of "tetralophodont gomphothere" native to Afro-Eurasia, that lived from the Tortonian stage of the late Miocene until its extinction during the Early Pleistocene, roughly from 8.5–2 million years ago.
Taxonomy
Anancus was named by Auguste Aymard in 1855. It is traditionally allocated to Gomphotheriidae, often as the only member of the subfamily Anancinae. Recently, some authors have excluded Anancus along with other tetralophodont gomphotheres from Gomphotheriidae, and regarded them as members of Elephantoidea instead.
Description
Two largely complete individuals of Anancus arvernensis reached shoulder heights of around , with a volumetric estimate suggesting a body mass of around , comparable to living African bush elephants. The tusks were largely straight, lacked enamel (though enamel was present in juveniles), and were slender and proportionally large, with a large tusk of A. arvernensis from Stoina, Romania measuring in length with an estimated mass of . The tusks varied from projecting forward parallel to each other to being outwardly divergent from each other, depending on the species. The skull is proportionally tall and short, with an elevated dome and an enlarged tympanic bulla. Unlike more primitive gomphotheres, the mandible was brevirostrine (shortened) and lacked lower tusks. The molars were typically tetralophodont (bearing four crests or ridges) but were pentalophodont in some species. The premolars were absent in all species other than A. kenyensis. On the upper molars, the posterior pretrite central conules were reduced, as were the anterior pretrite central conules on the lower molars. The pretrite and posttrite half-loph(id)s were dislocated from each other, resulting in the successive loph(id)s exhibiting an alternating pattern.
Diet
Dietary preferences of Anancus varied between species. Dental microwear analysis of Anancus arvernensis specimens from the Early Pleistocene of Europe generally suggests that it was a browser, consuming twigs, bark, seeds and fruit, with a browsing diet also proposed for the Early Pliocene South African A. capensis. The East African late Miocene-early Pliocene A. kenyensis and Pliocene A. ultimus have individuals with varying browsing, grazing, and mixed feeding (both browsing and grazing) diets, with a grazing diet proposed for Anancus specimens from the Pliocene of India based on isotopic analysis. Anancus osiris from the Pliocene of North Africa is suggested to have been a mixed feeder with a large grass intake based on microwear.
Evolution
Anancus is suggested to have evolved from Tetralophodon or a Tetralophodon-like ancestor. The oldest known species of Anancus is A. perimensis, with fossils known from the Tortonian (~8.5 million years ago) of the Siwalik Hills of Pakistan. Anancus entered Europe approximately 7.2 million years ago and dispersed into Africa around 7 million years ago. Anancus first appeared in China around 6 million years ago (A. sinensis). Anancus disappeared from Asia and Africa around the end of the Pliocene, approximately 2.6 million years ago. The extinction of Anancus in Africa has been attributed to competitive exclusion by elephantids, whose molar teeth were more efficient at processing grass. The European A. arvernensis was the last surviving species, becoming extinct during the Early Pleistocene, around 2 million years ago, with its latest possible record being at Eastern Scheldt in the Netherlands around 1.6 million years ago.
| Biology and health sciences | Proboscidea | Animals |
11052024 | https://en.wikipedia.org/wiki/Tropical%20cyclone%20track%20forecasting | Tropical cyclone track forecasting | Tropical cyclone track forecasting involves predicting where a tropical cyclone is going to track over the next five days, every 6 to 12 hours. The history of tropical cyclone track forecasting has evolved from a single-station approach to a comprehensive approach which uses a variety of meteorological tools and methods to make predictions. The weather of a particular location can show signs of the approaching tropical cyclone, such as increasing swell, increasing cloudiness, falling barometric pressure, increasing tides, squalls and heavy rainfall.
The forces that affect tropical cyclone steering are the higher-latitude westerlies, the subtropical ridge, and the beta effect caused by changes in the Coriolis force within fluids such as the atmosphere. Accurate track predictions depend on determining the position and strength of high- and low-pressure areas, and predicting how those areas will migrate during the life of a tropical system. Computer forecast models are used to help determine this motion as far out as five to seven days in the future.
History
The methods through which tropical cyclones are forecast have changed with the passage of time. The first known forecasts in the Western Hemisphere were made by Lt. Col. William Reed of the Corps of Royal Engineers at Barbados in 1847. Reed mostly utilized barometric pressure measurements as the basis of his forecasts. Benito Viñes, S.J., introduced a forecast and warning system based on cloud cover changes in Havana during the 1870s. Forecasting hurricane motion was based on tide movements, as well as cloud and barometer changes over time. In 1895, it was noted that cool conditions with unusually high pressure preceded tropical cyclones in the West Indies by several days. Before the early 1900s, most forecasts were done by direct observations at weather stations, which were then relayed to forecast centers via telegraph. It was not until the advent of radio in the early twentieth century that observations from ships at sea were available to forecasters. Despite the issuance of hurricane watches and warnings for systems threatening the coast, forecasting the path of tropical cyclones did not occur until 1920. By 1922, it was known that the winds at to in height above the sea surface within the storms' right front quadrant were representative of a storm's steering, and that hurricanes tended to follow the outermost closed isobar of the subtropical ridge.
In 1937, radiosondes were used to aid tropical cyclone forecasting. The next decade saw the advent of aircraft-based reconnaissance by the military, starting with the first dedicated flight into a hurricane in 1943, and the establishment of the Hurricane Hunters in 1944. In the 1950s, coastal weather radars began to be used in the United States, and research reconnaissance flights by the precursor of the Hurricane Research Division began in 1954. The launch of the first weather satellite, TIROS-I, in 1960 introduced new techniques to tropical cyclone forecasting that remain important to the present day. In the 1970s, buoys were introduced to improve the resolution of surface measurements, which until that point were not available at all over sea surfaces.
Single station forecasting of a tropical cyclone passage
About four days in advance of a typical tropical cyclone, an ocean swell will begin to roll in about every 10 seconds, moving towards the coast from the direction of the tropical cyclone's location. The swell will slowly increase in height and frequency the closer a tropical cyclone gets to land. Two days in advance of the center's passage, winds go calm as the tropical cyclone interrupts the environmental wind flow. Within 36 hours of the center passage, the pressure begins to fall and a veil of white cirrus clouds approaches from the cyclone's direction. Within 24 hours of the closest approach to the center, low clouds, known as the bar of the tropical cyclone, begin to move in, as the barometric pressure begins to fall more rapidly and the winds begin to increase. Within 18 hours of the center's approach, squally weather is common, with sudden increases in wind accompanied by rain showers or thunderstorms. Winds increase within 12 hours of the center's approach, occasionally reaching hurricane force. The ocean's surface becomes whipped with foam. Small items begin flying in the wind. Within 6 hours of the center's arrival, rain becomes continuous and the storm surge begins to come inland. Within an hour of the center, the rain becomes very heavy and the highest winds within the tropical cyclone are experienced. When the center arrives with a strong tropical cyclone, weather conditions improve and the sun becomes visible as the eye moves overhead. At this point, the pressure ceases to drop, as the lowest pressure within the storm's center is reached. This is also when the peak depth of the storm surge occurs. Once the system departs, winds reverse and, along with the rain, suddenly increase. The storm surge retreats as the pressure suddenly rises in the wake of its center. One day after the center's passage, the low overcast is replaced with a higher overcast, and the rain becomes intermittent. By 36 hours after the center's passage, the high overcast breaks and the pressure begins to level off.
Basics
The large-scale synoptic flow determines 70 to 90 percent of a tropical cyclone's motion. The deep-layer mean flow through the troposphere is considered the best tool for determining track direction and speed. If a storm experiences significant vertical wind shear, use of a lower-level wind, such as that at the 700 hPa pressure level (about 3 km above sea level), works better as a predictor. Knowledge of the beta effect can be used to anticipate a tropical cyclone's deviation from the steering flow, since differences in the Coriolis force across the cyclone lead to a more northwestward heading for tropical cyclones in the Northern Hemisphere. For example, the beta effect allows a tropical cyclone to track poleward and slightly to the right of the deep-layer steering flow while the system lies to the south of the subtropical ridge. Northwest-moving storms move quicker and to the left, while northeast-moving storms move slower and to the left. The larger the cyclone, the larger the impact of the beta effect is likely to be.
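To make the decomposition concrete, the sketch below (Python) adds a small northwestward beta-drift vector to a deep-layer steering vector. The magnitudes used are typical order-of-magnitude values assumed purely for illustration, not figures from this article.

```python
import math

# Illustrative decomposition of Northern Hemisphere tropical cyclone motion
# into steering flow plus beta drift. All magnitudes are assumed round numbers.

steering = (-5.0, 1.0)            # m/s as (east, north): westward, slightly poleward
beta_speed = 2.0                  # m/s: assumed beta-drift magnitude
beta_dir = math.radians(315.0)    # bearing "toward" the northwest

# Convert the bearing into (east, north) components and add the vectors.
beta_drift = (beta_speed * math.sin(beta_dir), beta_speed * math.cos(beta_dir))
motion = (steering[0] + beta_drift[0], steering[1] + beta_drift[1])

speed = math.hypot(*motion)
heading = (math.degrees(math.atan2(motion[0], motion[1])) + 360.0) % 360.0
print(f"net motion: {speed:.1f} m/s toward {heading:.0f} deg (west-northwest)")
```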
Fujiwhara effect
When two or more tropical cyclones are in proximity to one another, they begin to rotate cyclonically around the midpoint between their circulation centers. In the Northern Hemisphere this rotation is counterclockwise, and in the Southern Hemisphere it is clockwise. The tropical cyclones usually need to be within a certain critical distance of each other for this effect to take place. It is a more common phenomenon in the northern Pacific Ocean than elsewhere, owing to the higher frequency of tropical cyclone activity in that region.
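An idealized way to picture this mutual rotation is the classical two-point-vortex model, in which each vortex is advected by the flow the other induces, so a pair of equal cyclonic vortices orbits its midpoint. The Python sketch below is only this textbook idealization, with arbitrary units; real tropical cyclones are far more complex than point vortices.

```python
import math

# Idealized two-point-vortex sketch of the Fujiwhara effect (illustrative only).
# Two equal cyclonic vortices in the Northern Hemisphere orbit counterclockwise
# about their midpoint. Units are arbitrary.

GAMMA = 1.0   # circulation of each vortex (equal strengths)
DT = 0.01     # time step for simple forward-Euler integration

def induced(p_on, p_by):
    """Velocity induced at p_on by a counterclockwise vortex at p_by."""
    dx, dy = p_on[0] - p_by[0], p_on[1] - p_by[1]
    k = GAMMA / (2.0 * math.pi * (dx * dx + dy * dy))
    return (-dy * k, dx * k)

p1, p2 = [-0.5, 0.0], [0.5, 0.0]          # vortex 1 starts at bearing 180 deg
for _ in range(500):
    v1, v2 = induced(p1, p2), induced(p2, p1)
    p1 = [p1[0] + DT * v1[0], p1[1] + DT * v1[1]]
    p2 = [p2[0] + DT * v2[0], p2[1] + DT * v2[1]]

mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
angle = math.degrees(math.atan2(p1[1] - mid[1], p1[0] - mid[0])) % 360.0
print(f"midpoint stays fixed at ({mid[0]:.3f}, {mid[1]:.3f}); "
      f"vortex 1 has swung counterclockwise from 180 deg to about {angle:.0f} deg")
```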
Trochoidal motions
Small wobbles in a tropical cyclone's track can occur when the convection is distributed unevenly within its circulation, which can be due to changes in vertical wind shear or inner-core structure. Because of this effect, forecasters use a longer-term (6- to 24-hour) motion to forecast tropical cyclones, which acts to smooth out such wobbles.
Forecast models
High-speed computers and sophisticated simulation software allow meteorologists to run computer models that forecast tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. By combining forecast models with an increased understanding of the forces that act on tropical cyclones, and with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades. The addition of dropwindsonde missions around tropical cyclones, known as synoptic flow missions, decreased track error in the Atlantic basin by 15–20 percent. Using a consensus of forecast models, as well as ensemble members of the various models, can further reduce forecast error. However, regardless of how small the average error becomes, large errors within the guidance are still possible. An accurate track forecast is important, because if the track forecast is incorrect, forecasts for intensity, rainfall, storm surge, and tornado threat will also be incorrect.
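The simplest form of consensus guidance is just the average of the individual model positions at each lead time. In the Python sketch below, the model names and positions are entirely hypothetical, invented for illustration:

```python
# A minimal "consensus" track point: the mean of several model positions at one
# lead time. Model names and positions here are invented for illustration.

model_positions_72h = {
    "ModelA": (25.1, -78.4),   # (lat, lon) in degrees; hypothetical output
    "ModelB": (25.9, -77.6),
    "ModelC": (24.8, -78.9),
    "ModelD": (25.5, -78.1),
}

lats, lons = zip(*model_positions_72h.values())
consensus = (sum(lats) / len(lats), sum(lons) / len(lons))
print(f"72 h consensus position: {consensus[0]:.2f}N, {abs(consensus[1]):.2f}W")
```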
Length of forecast period
Forecasts within hurricane advisories were issued one day into the future in 1954, before being extended to two days into the future in 1961 and three days into the future in 1964. Starting in the mid-to-late 1990s, research into tropical cyclones and how forecast models handle them led to substantial improvements in track error. By 2001, the error had been reduced sufficiently to extend track forecasts out to 5 days in the future on public advisories. In addition, at 1700 UTC during the hurricane season, a medium-range coordination call takes place between the Hydrometeorological Prediction Center and the National Hurricane Center to coordinate tropical cyclone placement on the medium-range pressure forecasts 6 and 7 days into the future for the northeast Pacific and Atlantic basins. Every so often, even at this time range, successful predictions can be made.
In its forecasts, the National Hurricane Center uses a track forecast cone to represent graphically the uncertainty in its forecasts of a tropical cyclone's future location. The cone represents the probable position of a tropical cyclone's circulation center and is made by drawing a set of circles centered at each forecast point (12, 24, 36, 48, and 72 hours for a three-day forecast, as well as 96 and 120 hours for a five-day forecast). The radius of each circle is set to encompass two-thirds of the historical official forecast errors for the preceding five-year period. The cone is then constructed by drawing tangent lines that connect the outside boundaries of all the circles. The National Hurricane Center states that the entire track of the tropical cyclone "can be expected to remain within the cone roughly 60–70% of the time."
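The cone-building procedure described above can be sketched in a few lines of Python. Everything numeric below is a placeholder: the forecast points and the historical error samples are invented, not real NHC statistics, and the percentile rule is only a simple stand-in for the agency's actual error climatology.

```python
import math

# Sketch of an NHC-style track cone: one circle per forecast point, whose radius
# covers roughly two-thirds of recent official track errors at that lead time.
# Forecast points and error samples are placeholders, not real NHC numbers.

forecast_points = [(0.0, 0.0), (1.0, 0.8), (2.1, 1.5), (3.3, 2.0), (4.0, 3.1)]
error_samples = {   # hypothetical historical errors, same units as the track
    0: [0.2, 0.3, 0.25, 0.4], 1: [0.5, 0.7, 0.6, 0.9],
    2: [0.9, 1.2, 1.0, 1.5],  3: [1.4, 1.8, 1.6, 2.2], 4: [1.9, 2.4, 2.2, 3.0],
}

def radius_two_thirds(values):
    """Radius that encloses roughly two-thirds of the historical errors."""
    ordered = sorted(values)
    idx = max(0, math.ceil(0.67 * len(ordered)) - 1)
    return ordered[idx]

circles = [(pt, radius_two_thirds(error_samples[i]))
           for i, pt in enumerate(forecast_points)]
for (x, y), r in circles:
    print(f"circle centred at ({x:.1f}, {y:.1f}) with radius {r:.2f}")
# The cone itself is the region swept out by tangent lines joining successive
# circles, i.e. a smooth boundary wrapped around the union of the circles.
```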
| Physical sciences | Storms | Earth science |
11059653 | https://en.wikipedia.org/wiki/Sprite%20%28lightning%29 | Sprite (lightning) | Sprites or red sprites are large-scale electric discharges that occur in the mesosphere, high above thunderstorm clouds, or cumulonimbus, giving rise to a varied range of visual shapes flickering in the night sky. They are usually triggered by the discharges of positive lightning between an underlying thundercloud and the ground.
Precis
Sprites appear as luminous red-orange flashes. They often occur in clusters in the mesosphere, well above the troposphere, at an altitude range of roughly 50–90 km. Sporadic visual reports of sprites go back at least to 1886. They were first photographed on July 6, 1989, by scientists from the University of Minnesota and have subsequently been captured in video recordings thousands of times.
Sprites are sometimes inaccurately called upper-atmospheric lightning. However, they are cold plasma phenomena that lack the hot channel temperatures of tropospheric lightning, so they are more akin to fluorescent tube discharges than to lightning discharges. Sprites are associated with various other upper-atmospheric optical phenomena including blue jets and ELVES.
History
The earliest known report is by Toynbee and Mackenzie in 1886. Nobel laureate C. T. R. Wilson had suggested in 1925, on theoretical grounds, that electrical breakdown could occur in the upper atmosphere, and in 1956 he witnessed what possibly could have been a sprite. They were first documented photographically on July 6, 1989, when scientists from the University of Minnesota, using a low-light video camera, accidentally captured the first image of what would subsequently become known as a sprite.
Several years after their discovery, they were named sprites after the elusive air spirits of mythology. Since the 1989 video capture, sprites have been imaged from the ground, from aircraft and from space, and have become the subject of intensive investigations. A high-speed video captured by Thomas Ashcraft, Jacob L Harley, Matthew G McHarg, and Hans Nielsen in 2019, at about 100,000 frames per second, is fast enough to show in much greater detail how sprites develop. However, according to NASA's APOD blog, despite sprites being recorded in photographs and videos for more than 30 years, their "root cause" remains unknown, "apart from a general association with positive cloud-to-ground lightning." NASA also notes that not all storms exhibit sprite lightning.
In 2016, sprites were observed during Hurricane Matthew's passage through the Caribbean. The role of sprites in tropical cyclones is presently unknown.
Characteristics
Sprites have been observed over North America, Central America, South America, Europe, Central Africa (Zaire), Australia, the Sea of Japan and Asia and are believed to occur during most large thunderstorm systems.
Rodger (1999) categorized three types of sprites based on their visual appearance.
Jellyfish sprite – very large, up to about 50 by 50 km in extent.
Column sprite (C-sprite) – large-scale electrical discharges above the earth that are still not totally understood.
Carrot sprite – a column sprite with long tendrils.
Sprites are colored reddish-orange in their upper regions, with bluish hanging tendrils below, and can be preceded by a reddish halo. They last longer than normal lower stratospheric discharges, which typically last a few milliseconds, and are usually triggered by the discharges of positive lightning between the thundercloud and the ground, although sprites generated by negative ground flashes have also been observed. They often occur in clusters of two or more, and typically span an altitude range of roughly 50–90 km, with what appear to be tendrils hanging below and branches reaching above.
Optical imaging using a 10,000-frame-per-second high-speed camera showed that sprites are actually clusters of small, decameter-scale balls of ionization that are launched at an altitude of about 80 km and then move downward at speeds of up to ten percent of the speed of light, followed a few milliseconds later by a separate set of upward-moving balls of ionization. Sprites may be horizontally displaced by up to several tens of kilometres from the location of the underlying lightning strike, with a time delay following the lightning that is typically a few milliseconds, but on rare occasions may be up to 100 milliseconds.
In order to film sprites from Earth, special conditions must be present: a long, unobstructed line of sight to a powerful thunderstorm with positive lightning between cloud and ground, red-sensitive recording equipment, and a black, unlit sky.
Mechanism
Sprites occur near the top of the mesosphere, at about 80 km altitude, in response to the electric field generated by lightning flashes in underlying thunderstorms. When a sufficiently large positive lightning strike carries positive charge to the ground, the cloud top is left with a strongly negative net charge. This can be modeled as a quasi-static electric dipole, and for less than 10 milliseconds a strong electric field is generated in the region above the thunderstorm. In the low pressure of the upper mesosphere the breakdown voltage is drastically reduced, allowing an electron avalanche to occur. Sprites get their characteristic red color from the excitation of nitrogen in the low-pressure environment of the upper mesosphere. At such low pressures, quenching by atomic oxygen is much faster than that of nitrogen, allowing nitrogen emissions to dominate despite no difference in composition.
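The competition sketched above (a dipole field falling off as the cube of distance versus a breakdown threshold falling off exponentially with air density) can be checked with a simple order-of-magnitude calculation. In the Python sketch below, the charge-moment change, sea-level breakdown field, and density scale height are assumed round textbook values, not figures taken from this article; the point is only that the two curves cross high in the mesosphere.

```python
import math

# Order-of-magnitude sketch of why breakdown becomes possible near ~80 km:
# the quasi-static field above the storm falls off as 1/z^3, while the
# conventional breakdown threshold falls off exponentially with air density.
# All constants below are assumed round numbers for illustration.

EPS0 = 8.854e-12          # F/m
E_K_SEA_LEVEL = 3.2e6     # V/m, breakdown field at ground-level pressure
SCALE_HEIGHT = 7.0e3      # m, approximate atmospheric density scale height
CHARGE_MOMENT = 1.0e6     # C*m (1000 C*km), a large positive cloud-to-ground flash

def dipole_field(z):
    """Quasi-electrostatic field magnitude above the storm (on-axis estimate)."""
    return CHARGE_MOMENT / (2.0 * math.pi * EPS0 * z ** 3)

def breakdown_field(z):
    """Breakdown threshold scaled to the exponentially decreasing air density."""
    return E_K_SEA_LEVEL * math.exp(-z / SCALE_HEIGHT)

for km in range(60, 91, 5):
    z = km * 1.0e3
    flag = "breakdown possible" if dipole_field(z) > breakdown_field(z) else ""
    print(f"{km:2d} km: E = {dipole_field(z):6.1f} V/m, "
          f"E_k = {breakdown_field(z):6.1f} V/m  {flag}")
```

With these assumed values the applied field first exceeds the threshold near 80 km, consistent with the initiation altitude quoted above.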
Sprite halo
Sprites are sometimes preceded, by about 1 millisecond, by a sprite halo, a pancake-shaped region of weak, transient optical emissions roughly 50 km across and 10 km thick. The halo is centered at about 70–80 km altitude above the initiating lightning strike. These halos are thought to be produced by the same physical process that produces sprites, but one for which the ionization is too weak to cross the threshold required for streamer formation. They are sometimes mistaken for ELVES, due to their visual similarity and short duration.
Research carried out at Stanford University in 2000 indicates that, unlike sprites with bright vertical columnar structure, occurrence of sprite halos is not unusual in association with normal (negative) lightning discharges.
Research in 2004 by scientists from Tohoku University found that very low frequency emissions occur at the same time as the sprite, indicating that a discharge within the cloud may generate the sprites.
Related aircraft damage
Sprites have been blamed for otherwise unexplained accidents involving high-altitude vehicular operations above thunderstorms. One example is the malfunction of a NASA stratospheric balloon launched on June 6, 1989, from Palestine, Texas. The balloon suffered an uncommanded payload release while flying over a thunderstorm near Graham, Texas. Months after the accident, an investigation concluded that a "bolt of lightning" traveling upward from the clouds provoked the incident. The attribution of the accident to a sprite was made retroactively, since this term was not coined until late 1993.
| Physical sciences | Storms | Earth science |
2254751 | https://en.wikipedia.org/wiki/LBV%201806%E2%88%9220 | LBV 1806−20 | LBV 1806−20 is a candidate luminous blue variable (LBV) and likely binary star located tens of thousands of light-years from the Sun, towards the center of the Milky Way. It has an estimated mass of around 36 solar masses and an estimated variable luminosity of around two million times that of the Sun. It is highly luminous but invisible from the Solar System at visual wavelengths, because less than one billionth of its visible light reaches us.
When first discovered, LBV 1806−20 was considered both the most luminous and the most massive star known, which challenged scientific understanding of the formation of massive stars. Recent estimates place it somewhat nearer to Earth, which, combined with its binary nature, means that it is now well within the expected range of parameters for extremely luminous stars in the galaxy. At an estimated two million times the luminosity of the Sun, it remains one of the most luminous stars known in the galaxy.
Location
LBV 1806−20 lies at the core of radio nebula G10.0–0.3, which is believed to be primarily powered by its stellar wind. It is a member of the 1806−20 open cluster, itself a component of W31, one of the largest H II regions in the Milky Way. Cluster 1806−20 is made up of some highly unusual stars, including four Wolf–Rayet stars, several OB stars, and a magnetar (SGR 1806−20).
Spectrum
The spectral type of LBV 1806−20 is uncertain and possibly variable. It has been constrained to between O9 and B2 on the basis of an infrared HeI line equivalent width. The spectrum shows strong emission in the Paschen and Brackett series of hydrogen, but also emission lines of helium, FeII, MgII, and NaI. The lines are broad and have uneven profiles, some showing P Cygni profiles. High resolution spectra show that some HeI absorption lines are doubled.
Properties
Intervening dust in the direction of the Galactic Center absorbs an estimated 35 magnitudes at visual wavelengths, so most observations are conducted using infrared telescopes. On the basis of its luminosity and spectral type it is suspected of being an LBV but, despite the name, the characteristic photometric and spectroscopic variations have not yet been observed, so it remains only a candidate.
Binary
To account for the doubled HeI lines in its spectrum and the inconsistent mass, luminosity and age estimates, LBV 1806-20 has been proposed to be a binary. The emission lines are single, so only one star appears to have a dense stellar wind as might be expected from an LBV.
| Physical sciences | Notable stars | Astronomy |
2255608 | https://en.wikipedia.org/wiki/Tephritidae | Tephritidae | The Tephritidae are one of two fly families referred to as fruit flies, the other family being the Drosophilidae. The family Tephritidae does not include the biological model organisms of the genus Drosophila (in the family Drosophilidae), which is often called the "common fruit fly". Nearly 5,000 described species of tephritid fruit fly are categorized in almost 500 genera of the Tephritidae. Description, recategorization, and genetic analyses are constantly changing the taxonomy of this family. To distinguish them from the Drosophilidae, the Tephritidae are sometimes called peacock flies, in reference to their elaborate and colorful markings. The name comes from the Greek τεφρος, tephros, meaning "ash grey". They are found in all the biogeographic realms.
Description
For terms see Morphology of Diptera and Tephritidae glossary
Tephritids are small to medium-sized (2.5–10 mm) flies that are often colourful and usually have pictured wings, with the subcostal vein curving forward at a right angle. The head is hemispherical and usually short. The face is vertical or retreating and the frons is broad. Ocelli and ocellar bristles are present. The postvertical bristles are parallel to divergent. Two to eight pairs of frontal bristles are seen (at least one but usually several lower pairs curving inwards and at least one of the upper pairs curving backwards). In some species, the frontal bristles are inserted on a raised tubercle. Interfrontal setulae are usually absent or represented by one or two tiny setulae near the lunula. True vibrissae are absent, but several genera have strong bristles near the vibrissal angle. The wings usually have yellow, brown, or black markings or are dark-coloured with lighter markings. In a few species, the wings are clear. The costa has both a humeral and a subcostal break. The apical part of the subcosta is usually indistinct or even transparent and lies at about a right angle with respect to the basal part. Crossvein BM-Cu is present; the cell cup (posterior cubital cell or anal cell) is closed and nearly always narrows to an acute angle. It is closed by a geniculated vein (CuA2). The CuA2 vein is rarely straight or convex. The tibiae lack a dorsal preapical bristle. The female has an oviscape.
The larva is amphipneustic (having only the anterior and posterior pairs of spiracles). The body varies from white to yellowish or brown. The posterior end of pale-coloured species is sometimes black. The body tapers at the anterior. The two mandibles sometimes have teeth along the ventral margin. The antennomaxillary lobes at each side of the mandibles have several transverse oral ridges or short laminae directed posteriorly. The anterior spiracles (prothoracic spiracles) end bluntly and are not elongated. Each has at least three openings, or up to 50, arranged transversely in one to three groups or irregularly. Each posterior spiracle (anal spiracle) lacks a clearly defined peritreme and has three spiracular openings (in mature larvae). These are usually more or less horizontal, parallel, and usually bear branched spiracular hairs in four tufts.
Ecology
The larvae of almost all Tephritidae are phytophagous. Females deposit eggs in living, healthy plant tissue using their telescopic ovipositors. Here, the larvae find their food upon emerging. The larvae develop in leaves, stems, flowers, seeds, fruits, and roots of the host plant, depending on the species. Some species are gall-forming. One exception to the phytophagous lifestyle is Euphranta toxoneura (Loew) whose larvae develop in galls formed by sawflies. The adults sometimes have a very short lifespan. Some live for less than a week. Some species are monophagous (feeding on only one plant species) others are polyphagous (feeding on several, usually related plant species).
The behavioral ecology of tephritid fruit flies is of great interest to biologists. Some fruit flies have extensive mating rituals or territorial displays. Many are brightly colored and visually showy. Some fruit flies show Batesian mimicry, bearing the colors and markings of dangerous arthropods such as wasps or jumping spiders because it helps the fruit flies avoid predation, though the flies lack stingers.
Adult tephritid fruit flies are often found on the host plant and feeding on pollen, nectar, rotting plant debris, or honeydew.
Natural enemies include parasitoid wasps of the families Diapriidae and Braconidae.
Economic importance
Tephritid fruit flies are of major economic importance in agriculture. Some have negative effects, some positive. Various species of fruit flies cause damage to fruit and other plant crops. The genus Bactrocera is of worldwide notoriety for its destructive impact on agriculture. The olive fruit fly (B. oleae), for example, feeds on only one plant: the wild or commercially cultivated olive, Olea europaea. It has the capacity to ruin 100% of an olive crop by damaging the fruit. Bactrocera dorsalis is another highly invasive pest species that damages tropical fruit, vegetable, and nut crops. Euleia heraclei is a pest of celery and parsnips. The genus Anastrepha includes several important pests, notably A. grandis, A. ludens (Mexican fruit fly), A. obliqua, and A. suspensa. Other pests are Strauzia longipennis, a pest of sunflowers and Rhagoletis mendax, a pest of blueberries. Another notorious agricultural pest is the Mediterranean fruit fly or Medfly, Ceratitis capitata, which is responsible for millions of dollars' worth in expenses by countries for control and eradication efforts, in addition to costs of damage to fruit crops. Similarly, the Queensland fruit fly (Bactrocera tryoni) is responsible for more than $28.5 million in damage to Australian fruit crops a year. This species lays eggs in a wide variety of unripe fruit hosts, causing them to rot prior to ripening.
Some fruit flies are used as agents of biological control, thereby reducing the populations of pest species. Several species of the genus Urophora are used as control agents against rangeland-destroying noxious weeds such as starthistles and knapweeds, but their effectiveness is questionable. Urophora sirunaseva produces larvae that pupate within a woody gall within the flower and disrupt seed production. Chaetorellia acrolophi is an effective biocontrol agent against knapweeds. Chaetorellia australis and Chaetorellia succinea deposit eggs into starthistle seedheads, where their larvae consume the seeds and flower ovaries.
Because economically important tephritid fruit flies occur worldwide, vast networks of researchers, several international symposia, and intensive research activities on subjects ranging from ecology to molecular biology have grown up around them (see the Tephritid Workers Database).
Pest management techniques applied to tephritids include cover sprays with conventional pesticides. However, because of the deleterious impact of these pesticides, newer, less harmful, and more targeted pest control techniques have come into use, such as toxic food baits, the male annihilation technique (which uses species-specific male attractant parapheromones in toxic baits or mass trapping), and the sterile insect technique as part of integrated pest management.
Systematics
Tephritidae is divided into several subfamilies:
Blepharoneurinae (5 genera, 34 species)
Dacinae (41 genera, 1066 species)
Phytalmiinae (95 genera, 331 species)
Tachiniscinae (8 genera, 18 species)
Tephritinae (211 genera, 1859 species)
Trypetinae (118 genera, 1012 species)
The genera Oxyphora, Pseudorellia, and Stylia comprise 32 species, and are not included in any subfamily (incertae sedis).
Identification
Foote, R.H., Blanc, P.L. & Norrbom, A.L. 1993. Handbook of the Fruit Flies (Diptera: Tephritidae) of America North of Mexico. Cornell University Press (Comstock Publishing).
Merz, B. 1994. Diptera Tephritidae. Insecta Helvetica Fauna 10: 1–198.
White, I.M. 1988. Tephritid flies. Diptera: Tephritidae.
White, I.M. & Elson-Harris, M.M. 1994. Fruit Flies of Economic Significance: their Identification and Bionomics. 2nd ed. International Institute of Entomology, London.
Drew, R.A.I. & Romig, M.C. Tropical Fruit Flies of South-East Asia (Tephritidae: Dacinae). CABI.
Hendel, F. 1914. Die Gattungen der Bohrfliegen. Wien. Entomol. Ztg. 33: 73–98. Keys to world genera; out of date, but still the only world monograph.
Hendel, F. 1927. Trypetidae. In: Lindner, E. (Ed.), Die Fliegen der Paläarktischen Region 5, 49: 1–221. Keys to Palaearctic species, but now needs revision (in German).
Séguy, E. 1934. Diptères: Brachycères. II. Muscidae acalypterae, Scatophagidae. Paris: Éditions Faune de France 28.
Rikhter, V.A. 1988. Family Tephritidae. In: Bei-Bienko, G. Ya. (Ed.), Keys to the Insects of the European Part of the USSR, Volume 5 (Diptera), Part 2, English edition. Keys to Palaearctic species, but now needs revision.
Species lists
West Palaearctic including Russia
Australasian/Oceanian
Nearctic
Japan
World list
| Biology and health sciences | Flies (Diptera) | Animals |
2256307 | https://en.wikipedia.org/wiki/European%20spadefoot%20toad | European spadefoot toad | The European spadefoot toads are a family of frogs, the Pelobatidae, with only one extant genus Pelobates, containing six species. They are native to Europe, the Mediterranean, northwestern Africa, and western Asia.
Description
The European spadefoot toad grows up to in length and is often inconspicuously coloured. They have squat bodies with smooth skin and eyes with vertical pupils. They are predominantly fossorial (burrowing) frogs, which dig into sandy soils. Pelobatidae frogs burrow backwards and they spend much of their time in the ground. They prefer open areas with loose soil as opposed to dense compact soil to facilitate the burrowing and have hardened protrusions on their feet to aid in digging, which is the source of the common name. They emerge from the ground during periods of rain and breed in pools, which are usually temporary.
All of the species from this family have free-living, aquatic tadpoles. The eggs are laid in temporary ponds that may quickly evaporate, so the tadpole stage is unusually brief, with rapid development to the adult form in as little as two weeks. To further speed their growth, some of the tadpoles are cannibalistic, eating their brood-mates to increase their supply of protein.
Taxonomy
The seven species of American spadefoot toads (genera Scaphiopus and Spea) were previously also included in the family Pelobatidae, but are now generally regarded as the separate family Scaphiopodidae.
Family Pelobatidae
Genus †Elkobatrachus
†Elkobatrachus brocki
Genus †Liaobatrachus
Genus †Eopelobates
†Eopelobates anthracinus
†Eopelobates bayeri
†Eopelobates hinschei
†Eopelobates wagneri
Genus Pelobates
Pelobates balcanicus
Western spadefoot toad (Pelobates cultripes)
Common spadefoot (Pelobates fuscus)
Pelobates syriacus
Moroccan spadefoot toad (Pelobates varaldii)
Pallas's spadefoot toad (Pelobates vespertinus)
Fossils
The earliest fossil genus of pelobatids, Elkobatrachus, was described in 2006.
In the Jurassic Morrison Formation, pelobatids are represented by the ilium of an unnamed but indeterminate species. This ilium is larger than that of Enneabatrachus, a contemporary discoglossid species. A specimen has been recovered from Quarry 9 of Como Bluff in Wyoming. Pelobatids are present in stratigraphic zones 5 and 6 of the formation.
The Oligocene site of Enspel in Germany preserves evidence of pelobatid tadpoles feeding on pollen.
| Biology and health sciences | Frogs and toads | Animals |
2256337 | https://en.wikipedia.org/wiki/Radio%20spectrum%20pollution | Radio spectrum pollution | Radio spectrum pollution is the straying of radio waves outside their allocated portions of the electromagnetic spectrum, which causes problems for some activities. It is of particular concern to radio astronomers.
Radio spectrum pollution is mitigated by effective spectrum management. Within the United States, the Communications Act of 1934 grants authority for spectrum management to the President for all federal use (47 U.S.C. 305). The National Telecommunications and Information Administration (NTIA) manages the spectrum for the Federal Government. Its rules are found in the "NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management". The Federal Communications Commission (FCC) manages and regulates all domestic non-federal spectrum use (47 U.S.C. 301). Each country typically has its own spectrum regulatory organization. Internationally, the International Telecommunication Union (ITU) coordinates spectrum policy.
| Physical sciences | Radio astronomy | Astronomy |
2259059 | https://en.wikipedia.org/wiki/Chronometry | Chronometry | Chronometry or horology is the science studying the measurement of time and timekeeping. Chronometry enables the establishment of standard measurements of time, which have applications in a broad range of social and scientific areas. Horology usually refers specifically to the study of mechanical timekeeping devices, while chronometry is broader in scope, also including biological behaviours with respect to time (biochronometry), as well as the dating of geological material (geochronometry).
Horology is commonly used specifically with reference to the mechanical instruments created to keep time: clocks, watches, clockwork, sundials, hourglasses, clepsydras, timers, time recorders, marine chronometers, and atomic clocks are all examples of instruments used to measure time. People interested in horology are called horologists. The term is used both by people who deal professionally with timekeeping apparatus and by enthusiasts and scholars of horology. Horology and horologists have numerous organizations, both professional associations and more scholarly societies. The largest horological membership organisation globally is the NAWCC, the National Association of Watch and Clock Collectors, which is US based but also has local chapters elsewhere.
Records of timekeeping are attested during the Paleolithic, in the form of inscriptions made to mark the passing of lunar cycles and measure years. Written calendars were then invented, followed by mechanical devices. The highest levels of precision are presently achieved by atomic clocks, which are used to track the international standard second.
Etymology
Chronometry is derived from two root words, chronos and metron (χρόνος and μέτρον in Ancient Greek respectively), with rough meanings of "time" and "measure". The combination of the two is taken to mean time measuring.
In the Ancient Greek lexicon, meanings and translations differ depending on the source. Chronos is used in relation to definite periods of time, and is linked to dates, to chronological accuracy, and, in rare cases, to delay. The lengths of time it refers to range from seconds to seasons of the year to lifetimes; it can also concern periods of time in which some specific event takes place, persists, or is delayed.
The root word is correlated with the god Chronos in Ancient Greek mythology, who embodied the image of time and originated out of the primordial chaos. He was known as the one who spins the Zodiac Wheel, further evidence of his connection to the progression of time. However, Ancient Greek makes a distinction between two types of time: chronos, the static and continuing progression of present to future, time in a sequential and chronological sense; and kairos, a more abstract concept representing the opportune moment for action or change to occur.
Kairos (καιρός) carries little emphasis on precise chronology, instead denoting a time specifically fit for something, or a period of time characterised by some aspect of crisis, also relating to the endtime. It can likewise be seen in the light of an advantage, profit, or fruit of a thing, but has also been represented in apocalyptic feeling, and shown as variable between misfortune and success; for Homer it was likened to a body part left vulnerable by a gap in armor, a benefit or calamity depending on the perspective. It is also referenced in Christian theology, where it is used to imply God's action and judgement in circumstances.
Because of the inherent relation between chronos and kairos, and their joint function in the Ancient Greek portrayal and concept of time, understanding one means understanding the other in part. The implication of chronos, an indifferent disposition and eternal essence, lies at the core of the science of chronometry: bias is avoided, and definite measurement is favoured.
Subfields
Biochronometry
Biochronometry (also chronobiology or biological chronometry) is the study of biological behaviours and patterns seen in animals in relation to time. It can be categorised into circadian rhythms and circannual cycles. Examples of these behaviours include the relation of daily and seasonal tidal cues to the activity of marine plants and animals, the photosynthetic capacity and phototactic responsiveness of algae, and metabolic temperature compensation in bacteria.
Circadian rhythms of various species can be observed through their gross motor function throughout the course of a day. These patterns are more apparent when the day is further categorised into activity and rest times. Investigation into a species is conducted through comparisons of free-running and entrained rhythms, where the former is obtained from within the species' natural environment and the latter from a subject that has been taught certain behaviours. Circannual rhythms are alike but pertain to patterns on the scale of a year; migration, moulting, reproduction, and body weight are common examples, and research and investigation proceed with methods similar to those used for circadian patterns.
Circadian and circannual rhythms can be seen in all organisms, both single- and multi-celled. A sub-branch of biochronometry is microbiochronometry (also chronomicrobiology or microbiological chronometry), the examination of behavioural sequences and cycles within micro-organisms. Adapting to circadian and circannual rhythms is an essential evolution for living organisms. These studies, besides documenting the adaptations of organisms, bring to light factors affecting many species' and organisms' responses, and can also be applied to further understand overall physiology. This applies to humans as well; examples include factors of human performance, sleep, metabolism, and disease development, which are all connected to biochronometrical cycles.
Mental chronometry
Mental chronometry (also called cognitive chronometry) studies human information processing mechanisms, namely reaction time and perception. As well as being a field of chronometry, it also forms a part of cognitive psychology and its contemporary human information processing approach. Research comprises applications of the chronometric paradigms – many of which are related to classical reaction time paradigms from psychophysiology – through measuring the reaction times of subjects with varied methods, and contributes to studies in cognition and action. Reaction time models and the process of expressing the temporostructural organisation of human processing mechanisms have an innate computational essence to them. It has been argued that, because of this, conceptual frameworks of cognitive psychology cannot be integrated in their typical fashions.
One common method is the use of event-related potentials (ERPs) in stimulus-response experiments. These are fluctuations of generated transient voltages in neural tissues that occur in response to a stimulus event either immediately before or after. This testing emphasises the mental events' time-course and nature and assists in determining the structural functions in human information processing.
Geochronometry
The dating of geological materials makes up the field of geochronometry, and falls within areas of geochronology and stratigraphy, while differing itself from chronostratigraphy. The geochronometric scale is periodic, its units working in powers of 1000, and is based in units of duration, contrasting with the chronostratigraphic scale. The distinctions between the two scales have caused some confusion – even among academic communities.
Geochronometry deals with calculating a precise date for rock sediments and other geological events, giving an idea of the history of various areas: for example, volcanic and magmatic movements and occurrences can be easily recognised, as can marine deposits, which can be indicators of marine events and even global environmental changes. This dating can be done in a number of ways. All dependable methods – barring the exceptions of thermoluminescence, radioluminescence and ESR (electron spin resonance) dating – are based in radioactive decay, focusing on the degradation of the radioactive parent nuclide and the corresponding daughter product's growth.
By measuring the daughter isotopes in a specific sample, its age can be calculated. The preserved conformity of parent and daughter nuclides provides the basis for the radioactive dating of geochronometry, applying the Rutherford–Soddy law of radioactivity, specifically the concept of radioactive transformation in the growth of the daughter nuclide.
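To make the arithmetic concrete: assuming a closed system with no daughter nuclide present at formation, the daughter population grows as $D = P(e^{\lambda t} - 1)$, so the age is $t = \ln(1 + D/P)/\lambda$, with $\lambda = \ln 2 / t_{1/2}$. A minimal sketch of this calculation follows; the function name and the sample numbers are illustrative only.

```python
import math

def age_from_ratio(daughter_to_parent: float, half_life_years: float) -> float:
    """Radiometric age in years, assuming a closed system and no daughter
    nuclide at formation: D = P * (exp(lambda * t) - 1)."""
    decay_constant = math.log(2) / half_life_years  # lambda = ln 2 / half-life
    return math.log(1.0 + daughter_to_parent) / decay_constant

# Example: a daughter/parent ratio of 0.05 with a 1.25-billion-year half-life
# (roughly that of potassium-40) gives an age of about 88 million years.
print(f"{age_from_ratio(0.05, 1.25e9):.3e} years")
```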
Thermoluminescence is an extremely useful concept to apply, being used in a diverse range of areas in science, and dating by thermoluminescence is a cheap and convenient method for geochronometry. Thermoluminescence is the production of light from a heated insulator or semiconductor; it is occasionally confused with the incandescent light emission of a material, a different process despite the many similarities. Thermoluminescence occurs only if the material has had previous exposure to, and absorbed energy from, radiation. Importantly, the light emission of thermoluminescence cannot be repeated: the entire process, from the material's exposure to radiation onwards, would have to be repeated to generate another thermoluminescence emission. The age of a material can be determined by measuring, by means of a phototube, the amount of light given off during the heating process, as the emission is proportional to the dose of radiation the material absorbed.
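Because the emitted light is proportional to the absorbed dose, a thermoluminescence age reduces, in the simplest treatment, to the accumulated (equivalent) dose divided by the annual environmental dose rate. A minimal sketch under that assumption, with made-up numbers:

```python
def tl_age(equivalent_dose_gray: float, annual_dose_gray_per_year: float) -> float:
    """Thermoluminescence age (years) = accumulated dose / annual dose rate,
    assuming the luminescence signal was fully reset at the event being dated."""
    return equivalent_dose_gray / annual_dose_gray_per_year

# Example: 12 Gy accumulated at 3 mGy per year -> 4,000 years.
print(tl_age(12.0, 0.003))
```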
Time metrology
Time metrology or time and frequency metrology is the application of metrology for timekeeping, including frequency stability.
Its main tasks are the realization of the second as the SI unit of measurement for time and the establishment of time standards and frequency standards as well as their dissemination.
History
Early humans would have used their basic senses to perceive the time of day, and relied on their biological sense of time to discern the seasons in order to act accordingly. Their physiological and behavioural seasonal cycles were mainly influenced by a melatonin-based photoperiod time-measurement biological system (which measures the change in daylight within the annual cycle, giving a sense of the time of year) and by their circannual rhythms, which provide an anticipation of environmental events months beforehand to increase the chances of survival.
There is debate over when lunar calendars were first used, and over whether some findings constitute lunar calendars at all. Most related findings and materials from the Palaeolithic era are fashioned from bone and stone, with various markings from tools. These markings are thought not to have been made to represent lunar cycles but to be non-notational and irregular engravings; a pattern of later subsidiary marks that disregard the previous design indicates the markings were motifs and ritual marking instead.
However, as humans' focus turned to farming, the importance of and reliance on understanding the rhythms and cycles of the seasons grew, and the unreliability of lunar phases became problematic. An early human accustomed to the phases of the moon would use them as a rule of thumb, and the potential for weather to interfere with reading the cycle further degraded their reliability. Moreover, the length of a lunar month is on average shorter than our current calendar month, so it does not serve as a dependable alternative: as the years progressed, the accumulated error would grow until some other indicator provided a correction.
The Ancient Egyptian calendars were among the first calendars made, and the civil calendar endured for a long period afterwards, surviving past even its culture's collapse and through the early Christian era. Some have assumed it was invented near 4231 BC, but accurate and exact dating is difficult for that era, and the invention has also been attributed to 3200 BC, when the first historical king of Egypt, Menes, united Upper and Lower Egypt. The calendar was originally based on the cycles and phases of the moon; however, the Egyptians later realised it was flawed upon noticing that the star Sirius rose before sunrise every 365 days, a year as we know it now, and it was remade to consist of twelve months of thirty days, with five epagomenal days. The former is referred to as the Ancient Egyptians' lunar calendar, and the latter the civil calendar.
Early calendars often hold an element of their respective culture's traditions and values: for example, the five-day intercalary month of the Ancient Egyptians' civil calendar represented the birthdays of the gods Horus, Isis, Set, Osiris and Nephthys. Other examples are the Maya use of a zero date, and the Tzolkʼin's connection both to the Maya's thirteen layers of heaven (the product of thirteen and the twenty human digits making its 260-day year) and to the length of time between conception and birth in pregnancy.
Museums and libraries
Europe
There are many horology museums and several specialized libraries devoted to the subject. One example is the Royal Greenwich Observatory, which is also the source of the Prime Meridian and the home of the first marine timekeepers accurate enough to determine longitude (made by John Harrison). Other horological museums in the London area include the Clockmakers' Museum, which re-opened at the Science Museum in October 2015, the horological collections at the British Museum, the Science Museum (London), and the Wallace Collection. The Guildhall Library in London contains an extensive public collection on horology. In Upton, also in the United Kingdom, at the headquarters of the British Horological Institute, there is the Museum of Timekeeping. A more specialised museum of horology in the United Kingdom is the Cuckooland Museum in Cheshire, which hosts the world's largest collection of antique cuckoo clocks.
One of the more comprehensive museums dedicated to horology is the Musée international d'horlogerie in La Chaux-de-Fonds, Switzerland, which contains a public library of horology. The Musée d'Horlogerie du Locle, in nearby Le Locle, is smaller and likewise provides a public horological library.
In France, Besançon has the Musée du Temps (Museum of Time) in the historic Palais Granvelle. In Serpa and Évora, in Portugal, there is the Museu do Relógio. In Germany, there is the Deutsches Uhrenmuseum in Furtwangen im Schwarzwald, in the Black Forest, which contains a public library of horology.
North America
The two leading specialised horological museums in North America are the National Watch and Clock Museum in Columbia, Pennsylvania, and the American Clock and Watch Museum in Bristol, Connecticut. Another museum dedicated to clocks is the Willard House and Clock Museum in Grafton, Massachusetts. One of the most comprehensive horological libraries open to the public is the National Watch and Clock Library in Columbia, Pennsylvania.
Organizations
Notable scholarly horological organizations include:
American Watchmakers-Clockmakers Institute – AWCI (United States of America)
Antiquarian Horological Society – AHS (United Kingdom)
British Horological Institute – BHI (United Kingdom)
Chronometrophilia (Switzerland)
Deutsche Gesellschaft für Chronometrie – DGC (Germany)
Horological Society of New York – HSNY (United States of America)
National Association of Watch and Clock Collectors – NAWCC (United States of America)
| Physical sciences | Basics | Basics and measurement |
16301990 | https://en.wikipedia.org/wiki/User%20%28computing%29 | User (computing) | A user is a person who utilizes a computer or network service.
A user often has a user account and is identified to the system by a username (or user name).
Some software products provide services to other systems and have no direct end users.
End user
End users are the ultimate human users (also referred to as operators) of a software product. The end user stands in contrast to users who support or maintain the product such as sysops, database administrators and computer technicians. The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users. In user-centered design, it also distinguishes the software operator from the client who pays for its development and other stakeholders who may not directly use the software, but help establish its requirements. This abstraction is primarily useful in designing the user interface, and refers to a relevant subset of characteristics that most expected users would have in common.
In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines. When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users.
The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language.
Systems whose actor is another system or a software agent have no direct end users.
User account
A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log into an account, a user is typically required to authenticate with a password or other credentials for the purposes of accounting, security, logging, and resource management.
Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation. In Unix systems, the username is correlated with a user identifier or user ID.
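On a Unix-like system, this username-to-ID mapping can be inspected with Python's standard pwd module, for example. A short sketch follows; the username "alice" is hypothetical, and the lookup raises KeyError if no such account exists.

```python
import pwd  # standard library; available on Unix-like systems only

# Look up the account record for a hypothetical user "alice".
record = pwd.getpwnam("alice")
print(record.pw_uid)  # numeric user ID the OS uses internally
print(record.pw_dir)  # that user's home directory
```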
Computer systems operate in one of two types based on what kind of users they have:
Single-user systems do not have a concept of several user accounts.
Multi-user systems have such a concept, and require users to identify themselves before using the system.
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile, which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.
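These permission checks can be observed directly. In Python, for instance, the standard stat module renders a file's mode bits in the familiar rwx notation; the path below is hypothetical.

```python
import os
import stat

# Inspect the permission bits of a file in a (hypothetical) home directory.
mode = os.stat("/home/alice/notes.txt").st_mode
print(stat.filemode(mode))        # e.g. "-rw-r--r--"
print(bool(mode & stat.S_IRGRP))  # can group members read it?
```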
While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account.
Password storage
On Unix systems, local user accounts are stored in the file /etc/passwd, while user passwords may be stored at /etc/shadow in its hashed form.
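A record in /etc/passwd consists of seven colon-separated fields. A minimal parsing sketch, using an invented sample account; the "x" placeholder indicates the real hash lives in /etc/shadow:

```python
# Parse one /etc/passwd-style record: name:password:UID:GID:GECOS:home:shell
line = "alice:x:1000:1000:Alice Example:/home/alice:/bin/bash"
name, password, uid, gid, gecos, home, shell = line.strip().split(":")
print(name, uid, home)  # alice 1000 /home/alice
```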
On Microsoft Windows, user passwords can be managed within the Credential Manager program. The passwords are located in the Windows profile directory.
Username format
Various computer operating systems and applications expect or enforce different rules for the format of usernames.
In Microsoft Windows environments, for example, note the potential use of the following formats (a classification sketch follows the list):
User Principal Name (UPN) format – for example: UserName@Example.com
Down-Level Logon Name format – for example: DOMAIN\UserName
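A rough way to tell the two formats apart is by their separator. The sketch below is illustrative only and ignores edge cases:

```python
def classify_logon_name(name: str) -> str:
    """Very rough classification of a Windows logon name by its separator."""
    if "@" in name:
        return "User Principal Name (UPN)"
    if "\\" in name:
        return "Down-Level Logon Name"
    return "bare username"

print(classify_logon_name("UserName@Example.com"))  # User Principal Name (UPN)
print(classify_logon_name("DOMAIN\\UserName"))      # Down-Level Logon Name
```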
Terminology
Some usability professionals have expressed their dislike of the term "user" and have proposed changing it. Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'."
The term "user" may imply lack of the technical expertise required to fully understand how computer systems and software products work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.
| Technology | Computer security | null |
16302780 | https://en.wikipedia.org/wiki/Chrysaora%20fuscescens | Chrysaora fuscescens | Chrysaora fuscescens, the Pacific sea nettle or West Coast sea nettle, is a widespread planktonic scyphozoan cnidarian—or medusa, "jellyfish" or "jelly"—that lives in the northeastern Pacific Ocean, in temperate to cooler waters off British Columbia and the West Coast of the United States, ranging south to México. The Pacific sea nettle earned its common name in reference to its defensive, 'nettle'-like sting; much like the stinging nettle plant (Urtica dioica), the sea nettle's defensive sting is often irritating (possibly mildly painful) to humans, though rarely dangerous.
The Pacific sea nettle has a distinctive, golden-brown bell—the main functioning 'body' or 'head' of a jelly—with a reddish tint. The bell can grow to be larger than one meter (3 ft) in diameter in the wild; however, most are less than 50 cm across. The long and spiraling, whitish oral arms (and 24 undulating, maroon tentacles) may trail behind the nettle as far as 15 feet (4.6 m).
Since about the mid-20th century, C. fuscescens has proven to be a very popular cnidarian to feature at aquariums (and even some zoos with aquatic exhibits), mainly due to the public's fascination with their bright colors and extremely long tentacles. Additionally, the species is known for being quite low-maintenance in captivity, when provided with the appropriate water parameters and conditions. When these medusae are actively thriving under ideal conditions, they can even be easily bred via the culturing of polyps.
Taxonomy
Johann Friedrich von Brandt described this species in 1835. The origin of the genus name Chrysaora lies in Greek mythology, with Chrysaor, brother of Pegasus, the son of Poseidon and Medusa. Translated, Chrysaor is Greek for "he who has a golden armament", in reference to the goldenrod color of the nettle's bell. The species name, fuscescens, is Latin for "dark into light".
Distribution and habitat
Chrysaora fuscescens is commonly found along the coasts of southern British Columbia, Washington, Oregon and most of California to Baja California Sur, México. Some sea nettles will range further north to the Gulf of Alaska, or west to Japan, and rarely into the Gulf of California. The populations reach their peak during the late summer. In recent years, C. fuscescens has become overly abundant off the coast of Oregon, which is thought to be an indicator of climate change. However, others suspect that the population is increasing because of human influences to coastal regions. Industrial runoff to the ocean, as well as agricultural waste and other forms of human pollution (such as fertilizer and chemical plants), add considerable amounts of nutrients to the water when dumped into the ocean. This then feeds microorganisms and helps to fuel algal blooms, which subsequently fuels the entire food chain and potentially provides the nettles with enough food to see a population increase.
Feeding and predators
In common with other cnidaria, Chrysaora fuscescens are carnivorous animals. They catch their prey by means of cnidocyst- (or nematocyst-) laden tentacles that hang down in the water. The toxins in their nematocysts are effective against both their prey and humans, though they are typically nonlethal to the latter. Because C. fuscescens cannot chase after their prey, they must eat as they drift. By spreading out their tentacles like a large net, the sea nettle is able to catch food as it passes by. When prey brushes up against the tentacles, thousands of nematocysts are released, launching barbed stingers which release a paralyzing toxin into the quarry. The oral arms begin digestion as they transport the prey into the sea nettle's mouth.
C. fuscescens feeds on a wide variety of zooplankton, crustaceans, salps, pelagic snails, small fish as well as their eggs and larvae, and other jellyfish. Due to their growing numbers, they seem to be reducing fish populations and have become nuisances to the fishermen of Oregon by clogging up fishing nets. Their dense swarms have also become problematic for scientific trawls and water intake.
Despite the sea nettle's potent sting, some animals are apparently not bothered or affected by this defense mechanism at all; C. fuscescens often falls prey to many marine birds, large fish and some cetaceans, and is especially relished by leatherback turtles.
Physiology
Chrysaora fuscescens swim using jet propulsion by squeezing their bell and pushing water behind them, allowing them to swim against currents, although most of the time they prefer to simply float. Sometimes they pick up hitchhikers, including small fish and crabs, which hide inside the sea nettle's bell and may feed on it.
Chrysaora fuscescens uses light-sensing organs called ocelli to migrate from the deeper waters of the ocean to the surface.
Reproduction
Chrysaora fuscescens is capable of both sexual reproduction in the medusa stage and asexual reproduction in the polyp stage. The life cycle of C. fuscescens begins when a female catches sperm released by the males to fertilize the eggs she has produced and is holding in her mouth. These fertilized eggs remain attached to her oral arms, and there they grow into flat, bean-shaped planulae. Once they grow into flower-shaped polyps, they are released into the ocean, where they attach themselves to a solid surface and undergo asexual reproduction. The polyp makes identical copies of itself by means of budding, where the new polyp grows from its side. After the new polyp is fully formed, it too is released into the ocean and undergoes metamorphosis as it grows, developing a bell, arms, and tentacles until it is a fully formed medusa.
| Biology and health sciences | Cnidarians | Animals |
58550 | https://en.wikipedia.org/wiki/Thymine | Thymine | Thymine (symbol T or Thy) is one of the four nucleotide bases in the nucleic acid of DNA that are represented by the letters G–C–A–T. The others are adenine, guanine, and cytosine. Thymine is also known as 5-methyluracil, a pyrimidine nucleobase. In RNA, thymine is replaced by the nucleobase uracil. Thymine was first isolated in 1893 by Albrecht Kossel and Albert Neumann from calf thymus glands, hence its name.
Derivation
As its alternate name (5-methyluracil) suggests, thymine may be derived by methylation of uracil at the 5th carbon. In RNA, thymine is replaced with uracil in most cases. In DNA, thymine (T) binds to adenine (A) via two hydrogen bonds, thereby stabilizing the nucleic acid structures.
Thymine combined with deoxyribose creates the nucleoside deoxythymidine, which is synonymous with the term thymidine. Thymidine can be phosphorylated with up to three phosphoric acid groups, producing dTMP (deoxythymidine monophosphate), dTDP, or dTTP (for the di- and tri- phosphates, respectively).
One of the common mutations of DNA involves two adjacent thymines or cytosines, which, in the presence of ultraviolet light, may form thymine dimers, causing "kinks" in the DNA molecule that inhibit normal function.
Thymine could also be a target for actions of 5-fluorouracil (5-FU) in cancer treatment. 5-FU can be a metabolic analog of thymine (in DNA synthesis) or uracil (in RNA synthesis). Substitution of this analog inhibits DNA synthesis in actively dividing cells.
Thymine bases are frequently oxidized to hydantoins over time after the death of an organism.
Thymine imbalance causes mutation
During growth of bacteriophage T4, an imbalance of thymine availability, either a deficiency or an excess of thymine, causes increased mutation. The mutations caused by thymine deficiency appear to occur only at AT base pair sites in DNA and are often AT to GC transition mutations. In the bacterium Escherichia coli, thymine deficiency was also found to be mutagenic and to cause AT to GC transitions.
Theoretical aspects
In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), another carbon-rich compound, may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists. Thymine has not been found in meteorites, which suggests the first strands of DNA had to look elsewhere to obtain this building block. Thymine likely formed within some meteorite parent bodies, but may not have persisted within these bodies due to an oxidation reaction with hydrogen peroxide.
Synthesis
Laboratory synthesis
Thymine was first prepared by hydrolysis of the corresponding nucleoside obtained from natural sources. Interest in its direct chemical synthesis began in the early 1900s: Emil Fischer published a method starting from urea, but a more practical synthesis used methylisothiourea in a condensation reaction with ethyl formyl propionate, followed by hydrolysis of the pyrimidine intermediate.
Many other preparative methods have been developed, including optimised conditions so that urea can be used directly in the reaction shown above, preferably with methyl formyl propionate.
| Biology and health sciences | Nucleic acids | Biology |
58610 | https://en.wikipedia.org/wiki/Non-Euclidean%20geometry | Non-Euclidean geometry | In mathematics, non-Euclidean geometry consists of two geometries based on axioms closely related to those that specify Euclidean geometry. As Euclidean geometry lies at the intersection of metric geometry and affine geometry, non-Euclidean geometry arises by either replacing the parallel postulate with an alternative, or relaxing the metric requirement. In the former case, one obtains hyperbolic geometry and elliptic geometry, the traditional non-Euclidean geometries. When the metric requirement is relaxed, then there are affine planes associated with the planar algebras, which give rise to kinematic geometries that have also been called non-Euclidean geometry.
Principles
The essential difference between the metric geometries is the nature of parallel lines. Euclid's fifth postulate, the parallel postulate, is equivalent to Playfair's postulate, which states that, within a two-dimensional plane, for any given line and a point A, which is not on , there is exactly one line through A that does not intersect . In hyperbolic geometry, by contrast, there are infinitely many lines through A not intersecting , while in elliptic geometry, any line through A intersects .
Another way to describe the differences between these geometries is to consider two straight lines indefinitely extended in a two-dimensional plane that are both perpendicular to a third line (in the same plane):
In Euclidean geometry, the lines remain at a constant distance from each other (meaning that a line drawn perpendicular to one line at any point will intersect the other line and the length of the line segment joining the points of intersection remains constant) and are known as parallels.
In hyperbolic geometry, they diverge from each other, increasing in distance as one moves further from the points of intersection with the common perpendicular; these lines are often called ultraparallels.
In elliptic geometry, the lines converge toward each other and intersect.
History
Background
Euclidean geometry, named after the Greek mathematician Euclid, includes some of the oldest known mathematics, and geometries that deviated from this were not widely accepted as legitimate until the 19th century.
The debate that eventually led to the discovery of the non-Euclidean geometries began almost as soon as Euclid wrote Elements. In the Elements, Euclid begins with a limited number of assumptions (23 definitions, five common notions, and five postulates) and seeks to prove all the other results (propositions) in the work. The most notorious of the postulates is often referred to as "Euclid's Fifth Postulate", or simply the parallel postulate, which in Euclid's original formulation is:
If a straight line falls on two straight lines in such a manner that the interior angles on the same side are together less than two right angles, then the straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.
Other mathematicians have devised simpler forms of this property. Regardless of the form of the postulate, however, it consistently appears more complicated than Euclid's other postulates:
To draw a straight line from any point to any point.
To produce [extend] a finite straight line continuously in a straight line.
To describe a circle with any centre and distance [radius].
That all right angles are equal to one another.
For at least a thousand years, geometers were troubled by the disparate complexity of the fifth postulate, and believed it could be proved as a theorem from the other four. Many attempted to find a proof by contradiction, including Ibn al-Haytham (Alhazen, 11th century), Omar Khayyám (12th century), Nasīr al-Dīn al-Tūsī (13th century), and Giovanni Girolamo Saccheri (18th century).
The theorems of Ibn al-Haytham, Khayyam and al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were "the first few theorems of the hyperbolic and the elliptic geometries". These theorems along with their alternative postulates, such as Playfair's axiom, played an important role in the later development of non-Euclidean geometry. These early attempts at challenging the fifth postulate had a considerable influence on its development among later European geometers, including Witelo, Levi ben Gerson, Alfonso, John Wallis and Saccheri. All of these early attempts at formulating non-Euclidean geometry, however, provided flawed proofs of the parallel postulate, depending on assumptions that are now recognized as essentially equivalent to it. These early attempts did, however, provide some early properties of the hyperbolic and elliptic geometries.
Khayyam, for example, tried to derive it from an equivalent postulate he formulated from "the principles of the Philosopher" (Aristotle): "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge." Khayyam then considered the three cases right, obtuse, and acute that the summit angles of a Saccheri quadrilateral can take and after proving a number of theorems about them, he correctly refuted the obtuse and acute cases based on his postulate and hence derived the classic postulate of Euclid, which he didn't realize was equivalent to his own postulate. Another example is al-Tusi's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), who wrote a book on the subject in 1298, based on al-Tusi's later thoughts, which presented another hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements." His work was published in Rome in 1594 and was studied by European geometers, including Saccheri who criticised this work as well as that of Wallis.
Giordano Vitale, in his book Euclide restituo (1680, 1686), used the Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant.
In a work titled Euclides ab Omni Naevo Vindicatus (Euclid Freed from All Flaws), published in 1733, Saccheri quickly discarded elliptic geometry as a possibility (some others of Euclid's axioms must be modified for elliptic geometry to work) and set to work proving a great number of results in hyperbolic geometry.
He finally reached a point where he believed that his results demonstrated the impossibility of hyperbolic geometry. His claim seems to have been based on Euclidean presuppositions, because no logical contradiction was present. In this attempt to prove Euclidean geometry he instead unintentionally discovered a new viable geometry, but did not realize it.
In 1766 Johann Lambert wrote, but did not publish, Theorie der Parallellinien in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure now known as a Lambert quadrilateral, a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyam, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.
At this time it was widely believed that the universe worked according to the principles of Euclidean geometry.
Development of non-Euclidean geometry
The beginning of the 19th century would finally witness decisive steps in the creation of non-Euclidean geometry.
Circa 1813, Carl Friedrich Gauss and independently around 1818, the German professor of law Ferdinand Karl Schweikart had the germinal ideas of non-Euclidean geometry worked out, but neither published any results. Schweikart's nephew Franz Taurinus did publish important results of hyperbolic trigonometry in two papers in 1825 and 1826, yet while admitting the internal consistency of hyperbolic geometry, he still believed in the special role of Euclidean geometry.
Then, in 1829–1830 the Russian mathematician Nikolai Ivanovich Lobachevsky and in 1832 the Hungarian mathematician János Bolyai separately and independently published treatises on hyperbolic geometry. Consequently, hyperbolic geometry is called Lobachevskian or Bolyai-Lobachevskian geometry, as both mathematicians, independent of each other, are the basic authors of non-Euclidean geometry. Gauss mentioned to Bolyai's father, when shown the younger Bolyai's work, that he had developed such a geometry several years before, though he did not publish. While Lobachevsky created a non-Euclidean geometry by negating the parallel postulate, Bolyai worked out a geometry where both the Euclidean and the hyperbolic geometry are possible depending on a parameter k. Bolyai ends his work by mentioning that it is not possible to decide through mathematical reasoning alone if the geometry of the physical universe is Euclidean or non-Euclidean; this is a task for the physical sciences.
Bernhard Riemann, in a famous lecture in 1854, founded the field of Riemannian geometry, discussing in particular the ideas now called manifolds, Riemannian metric, and curvature.
He constructed an infinite family of non-Euclidean geometries by giving a formula for a family of Riemannian metrics on the unit ball in Euclidean space. The simplest of these is called elliptic geometry and it is considered a non-Euclidean geometry due to its lack of parallel lines.
By formulating the geometry in terms of a curvature tensor, Riemann allowed non-Euclidean geometry to apply to higher dimensions. Beltrami (1868) was the first to apply Riemann's geometry to spaces of negative curvature.
Terminology
It was Gauss who coined the term "non-Euclidean geometry". He was referring to his own work, which today we call hyperbolic geometry or Lobachevskian geometry. Several modern authors still use the generic term non-Euclidean geometry to mean hyperbolic geometry.
Arthur Cayley noted that distance between points inside a conic could be defined in terms of logarithm and the projective cross-ratio function. The method has become called the Cayley–Klein metric because Felix Klein exploited it to describe the non-Euclidean geometries in articles in 1871 and 1873 and later in book form. The Cayley–Klein metrics provided working models of hyperbolic and elliptic metric geometries, as well as Euclidean geometry.
Klein is responsible for the terms "hyperbolic" and "elliptic" (in his system he called Euclidean geometry parabolic, a term that generally fell out of use). His influence has led to the current usage of the term "non-Euclidean geometry" to mean either "hyperbolic" or "elliptic" geometry.
There are some mathematicians who would extend the list of geometries that should be called "non-Euclidean" in various ways.
There are many kinds of geometry that are quite different from Euclidean geometry but are also not necessarily included in the conventional meaning of "non-Euclidean geometry", such as more general instances of Riemannian geometry.
Axiomatic basis of non-Euclidean geometry
Euclidean geometry can be axiomatically described in several ways. However, Euclid's original system of five postulates (axioms) is not one of these, as his proofs relied on several unstated assumptions that should also have been taken as axioms. Hilbert's system consisting of 20 axioms most closely follows the approach of Euclid and provides the justification for all of Euclid's proofs. Other systems, using different sets of undefined terms obtain the same geometry by different paths. All approaches, however, have an axiom that is logically equivalent to Euclid's fifth postulate, the parallel postulate. Hilbert uses the Playfair axiom form, while Birkhoff, for instance, uses the axiom that says that, "There exists a pair of similar but not congruent triangles." In any of these systems, removal of the one axiom equivalent to the parallel postulate, in whatever form it takes, and leaving all the other axioms intact, produces absolute geometry. As the first 28 propositions of Euclid (in The Elements) do not require the use of the parallel postulate or anything equivalent to it, they are all true statements in absolute geometry.
To obtain a non-Euclidean geometry, the parallel postulate (or its equivalent) must be replaced by its negation. Negating Playfair's axiom form, since it is a compound statement (... there exists one and only one ...), can be done in two ways:
Either there will exist more than one line through the point parallel to the given line or there will exist no lines through the point parallel to the given line. In the first case, replacing the parallel postulate (or its equivalent) with the statement "In a plane, given a point P and a line not passing through P, there exist two lines through P, which do not meet " and keeping all the other axioms, yields hyperbolic geometry.
The second case is not dealt with as easily. Simply replacing the parallel postulate with the statement, "In a plane, given a point P and a line not passing through P, all the lines through P meet ", does not give a consistent set of axioms. This follows since parallel lines exist in absolute geometry, but this statement says that there are no parallel lines. This problem was known (in a different guise) to Khayyam, Saccheri and Lambert and was the basis for their rejecting what was known as the "obtuse angle case". To obtain a consistent set of axioms that includes this axiom about having no parallel lines, some other axioms must be tweaked. These adjustments depend upon the axiom system used. Among others, these tweaks have the effect of modifying Euclid's second postulate from the statement that line segments can be extended indefinitely to the statement that lines are unbounded. Riemann's elliptic geometry emerges as the most natural geometry satisfying this axiom.
Models
Models of non-Euclidean geometry are mathematical models of geometries which are non-Euclidean in the sense that it is not the case that exactly one line can be drawn parallel to a given line l through a point that is not on l. In hyperbolic geometric models, by contrast, there are infinitely many lines through A parallel to l, and in elliptic geometric models, parallel lines do not exist. (See the entries on hyperbolic geometry and elliptic geometry for more information.)
Euclidean geometry is modelled by our notion of a "flat plane."
The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other are identified (considered to be the same).
The pseudosphere has the appropriate curvature to model hyperbolic geometry.
Elliptic geometry
The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other (called antipodal points) are identified (considered the same). This is also one of the standard models of the real projective plane. The difference is that as a model of elliptic geometry a metric is introduced permitting the measurement of lengths and angles, while as a model of the projective plane there is no such metric.
In the elliptic model, for any given line and a point A, which is not on , all lines through A will intersect .
Hyperbolic geometry
Even after the work of Lobachevsky, Gauss, and Bolyai, the question remained: "Does such a model exist for hyperbolic geometry?". The model for hyperbolic geometry was answered by Eugenio Beltrami, in 1868, who first showed that a surface called the pseudosphere has the appropriate curvature to model a portion of hyperbolic space and in a second paper in the same year, defined the Klein model, which models the entirety of hyperbolic space, and used this to show that Euclidean geometry and hyperbolic geometry were equiconsistent so that hyperbolic geometry was logically consistent if and only if Euclidean geometry was. (The reverse implication follows from the horosphere model of Euclidean geometry.)
In the hyperbolic model, within a two-dimensional plane, for any given line and a point A, which is not on , there are infinitely many lines through A that do not intersect .
In these models, the concepts of non-Euclidean geometries are represented by Euclidean objects in a Euclidean setting. This introduces a perceptual distortion wherein the straight lines of the non-Euclidean geometry are represented by Euclidean curves that visually bend. This "bending" is not a property of the non-Euclidean lines, only an artifice of the way they are represented.
Three-dimensional non-Euclidean geometry
In three dimensions, there are eight models of geometries. There are Euclidean, elliptic, and hyperbolic geometries, as in the two-dimensional case; mixed geometries that are partially Euclidean and partially hyperbolic or spherical; twisted versions of the mixed geometries; and one unusual geometry that is completely anisotropic (i.e. every direction behaves differently).
Uncommon properties
Euclidean and non-Euclidean geometries naturally have many similar properties, namely those that do not depend upon the nature of parallelism. This commonality is the subject of absolute geometry (also called neutral geometry). However, the properties that distinguish one geometry from others have historically received the most attention.
Besides the behavior of lines with respect to a common perpendicular, mentioned in the introduction, we also have the following:
A Lambert quadrilateral is a quadrilateral with three right angles. The fourth angle of a Lambert quadrilateral is acute if the geometry is hyperbolic, a right angle if the geometry is Euclidean or obtuse if the geometry is elliptic. Consequently, rectangles exist (a statement equivalent to the parallel postulate) only in Euclidean geometry.
A Saccheri quadrilateral is a quadrilateral with two sides of equal length, both perpendicular to a side called the base. The other two angles of a Saccheri quadrilateral are called the summit angles and they have equal measure. The summit angles of a Saccheri quadrilateral are acute if the geometry is hyperbolic, right angles if the geometry is Euclidean and obtuse angles if the geometry is elliptic.
The sum of the measures of the angles of any triangle is less than 180° if the geometry is hyperbolic, equal to 180° if the geometry is Euclidean, and greater than 180° if the geometry is elliptic. The defect of a triangle is the numerical value (180° − sum of the measures of the angles of the triangle). This result may also be stated as: the defect of triangles in hyperbolic geometry is positive, the defect of triangles in Euclidean geometry is zero, and the defect of triangles in elliptic geometry is negative.
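To make the elliptic case concrete, Girard's theorem on the unit sphere gives angle sum = π + area, so the defect is negative with magnitude equal to the triangle's area. A small sketch using the octant triangle (the north pole plus two equatorial points a quarter-turn apart, giving three right angles):

```python
import math

# Girard's theorem on the unit sphere: angle_sum = pi + area (spherical excess).
angles = [math.pi / 2] * 3      # octant triangle: three right angles
angle_sum = sum(angles)
defect = math.pi - angle_sum    # 180 degrees minus the angle sum, in radians
area = angle_sum - math.pi      # spherical excess equals the area

print(math.degrees(angle_sum))  # 270.0 -> more than 180 degrees
print(math.degrees(defect))     # -90.0 -> negative defect in elliptic geometry
print(area)                     # 1.5707... = pi/2, the octant's area
```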
Importance
Before the models of a non-Euclidean plane were presented by Beltrami, Klein, and Poincaré, Euclidean geometry stood unchallenged as the mathematical model of space. Furthermore, since the substance of the subject in synthetic geometry was a chief exhibit of rationality, the Euclidean point of view represented absolute authority.
The discovery of the non-Euclidean geometries had a ripple effect which went far beyond the boundaries of mathematics and science. The philosopher Immanuel Kant's treatment of human knowledge had a special role for geometry. It was his prime example of synthetic a priori knowledge; not derived from the senses nor deduced through logic — our knowledge of space was a truth that we were born with. Unfortunately for Kant, his concept of this unalterably true geometry was Euclidean. Theology was also affected by the change from absolute truth to relative truth in the way that mathematics is related to the world around it, a change that was a result of this paradigm shift.
Non-Euclidean geometry is an example of a scientific revolution in the history of science, in which mathematicians and scientists changed the way they viewed their subjects. Some geometers called Lobachevsky the "Copernicus of Geometry" due to the revolutionary character of his work.
The existence of non-Euclidean geometries impacted the intellectual life of Victorian England in many ways and in particular was one of the leading factors that caused a re-examination of the teaching of geometry based on Euclid's Elements. This curriculum issue was hotly debated at the time and was even the subject of a book, Euclid and his Modern Rivals, written by Charles Lutwidge Dodgson (1832–1898) better known as Lewis Carroll, the author of Alice in Wonderland.
Planar algebras
In analytic geometry a plane is described with Cartesian coordinates: C = {(x, y) : x, y ∈ ℝ}.
The points are sometimes identified with generalized complex numbers z = x + yε where ε² ∈ {−1, 0, 1}.
The Euclidean plane corresponds to the case ε² = −1, ε being an imaginary unit. Since the modulus of z is given by
zz∗ = (x + yε)(x − yε) = x² + y²,
this quantity is the square of the Euclidean distance between z and the origin.
For instance, {z : zz∗ = 1} is the unit circle.
For planar algebra, non-Euclidean geometry arises in the other cases.
When ε² = +1, ε is a hyperbolic unit. Then z is a split-complex number and conventionally j replaces epsilon. Then
zz∗ = (x + yj)(x − yj) = x² − y²,
and {z : zz∗ = 1} is the unit hyperbola.
When ε² = 0, then z is a dual number.
This approach to non-Euclidean geometry explains the non-Euclidean angles: the parameters of slope in the dual number plane and hyperbolic angle in the split-complex plane correspond to angle in Euclidean geometry. Indeed, they each arise in polar decomposition of a complex number z.
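To make the three cases concrete, here is a minimal Python sketch (an illustrative addition, not part of the source article; the class GC and its method names are invented for this example). It multiplies generalized complex numbers for each choice of ε² and checks the invariant zz∗ that cuts out the unit circle and the unit hyperbola:

```python
import math

class GC:
    """Generalized complex number x + y*eps, where eps**2 = k and k is -1, 0, or +1."""
    def __init__(self, x, y, k):
        self.x, self.y, self.k = x, y, k

    def conj(self):
        # Conjugation negates the eps-component, as for ordinary complex numbers.
        return GC(self.x, -self.y, self.k)

    def __mul__(self, other):
        # (x1 + y1*eps)(x2 + y2*eps) = (x1*x2 + k*y1*y2) + (x1*y2 + y1*x2)*eps
        return GC(self.x * other.x + self.k * self.y * other.y,
                  self.x * other.y + self.y * other.x,
                  self.k)

    def sq_modulus(self):
        # z z* = x**2 - k*y**2; setting it to 1 gives the unit circle (k = -1),
        # the pair of lines x = +/-1 (k = 0), or the unit hyperbola (k = +1).
        return (self * self.conj()).x

# Euclidean case: (0.6, 0.8) lies on the unit circle.
print(GC(0.6, 0.8, -1).sq_modulus())                    # 1.0
# Split-complex case: (cosh a, sinh a) lies on the unit hyperbola.
a = 0.5
print(GC(math.cosh(a), math.sinh(a), +1).sq_modulus())  # 1.0 (up to rounding)
```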
Kinematic geometries
Hyperbolic geometry found an application in kinematics with the physical cosmology introduced by Hermann Minkowski in 1908. Minkowski introduced terms like worldline and proper time into mathematical physics. He realized that the submanifold of events one moment of proper time into the future could be considered a hyperbolic space of three dimensions.
Already in the 1890s Alexander Macfarlane was charting this submanifold through his Algebra of Physics and hyperbolic quaternions, though Macfarlane did not use cosmological language as Minkowski did in 1908. The relevant structure is now called the hyperboloid model of hyperbolic geometry.
The non-Euclidean planar algebras support kinematic geometries in the plane. For instance, the split-complex number z = eaj can represent a spacetime event one moment into the future of a frame of reference of rapidity a. Furthermore, multiplication by z amounts to a Lorentz boost mapping the frame with rapidity zero to that with rapidity a.
Kinematic study makes use of the dual numbers z = x + yε, ε² = 0, to represent the classical description of motion in absolute time and space: the equations x′ = x + vt, t′ = t.
These equations are equivalent to a shear mapping in linear algebra, the matrix (1, v; 0, 1) acting on the column vector (x, t).
With dual numbers the mapping is t′ + x′ε = (1 + vε)(t + xε) = t + (x + vt)ε.
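A small sketch of the dual-number computation above (illustrative only; dual_mul is a hypothetical helper, with dual numbers represented as coefficient pairs):

```python
def dual_mul(a, b):
    # a = (a0, a1) stands for a0 + a1*eps; since eps**2 = 0 the a1*b1 term vanishes.
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

v, x, t = 2.0, 3.0, 1.5
# The event is written t + x*eps, so multiplying by 1 + v*eps shears it:
tp, xp = dual_mul((1.0, v), (t, x))
print(tp, xp)  # 1.5 6.0, i.e. t' = t and x' = x + v*t
```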
Another view of special relativity as a non-Euclidean geometry was advanced by E. B. Wilson and Gilbert Lewis in Proceedings of the American Academy of Arts and Sciences in 1912. They revamped the analytic geometry implicit in the split-complex number algebra into synthetic geometry of premises and deductions.
Fiction
Non-Euclidean geometry often makes appearances in works of science fiction and fantasy.
In 1895, H. G. Wells published the short story "The Remarkable Case of Davidson's Eyes". To appreciate this story one should know how antipodal points on a sphere are identified in a model of the elliptic plane. In the story, in the midst of a thunderstorm, Sidney Davidson sees "Waves and a remarkably neat schooner" while working in an electrical laboratory at Harlow Technical College. At the story's close, Davidson proves to have witnessed H.M.S. Fulmar off Antipodes Island.
Non-Euclidean geometry is sometimes connected with the influence of the 20th-century horror fiction writer H. P. Lovecraft. In his works, many unnatural things follow their own unique laws of geometry: in Lovecraft's Cthulhu Mythos, the sunken city of R'lyeh is characterized by its non-Euclidean geometry. It is heavily implied this is achieved as a side effect of not following the natural laws of this universe rather than simply using an alternate geometric model, as the sheer innate wrongness of it is said to be capable of driving those who look upon it insane.
The main character in Robert Pirsig's Zen and the Art of Motorcycle Maintenance mentioned Riemannian geometry on multiple occasions.
In The Brothers Karamazov, Dostoevsky discusses non-Euclidean geometry through his character Ivan.
Christopher Priest's novel Inverted World describes the struggle of living on a planet with the form of a rotating pseudosphere.
Robert Heinlein's The Number of the Beast utilizes non-Euclidean geometry to explain instantaneous transport through space and time and between parallel and fictional universes.
Zeno Rogue's HyperRogue is a roguelike game set on the hyperbolic plane, allowing the player to experience many properties of this geometry. Many mechanics, quests, and locations are strongly dependent on the features of hyperbolic geometry.
In the Renegade Legion science fiction setting for FASA's wargame, role-playing-game and fiction, faster-than-light travel and communications are possible through the use of Hsieh Ho's Polydimensional Non-Euclidean Geometry, published sometime in the middle of the 22nd century.
In Ian Stewart's Flatterland the protagonist Victoria Line visits all kinds of non-Euclidean worlds.
| Mathematics | Geometry | null |
58637 | https://en.wikipedia.org/wiki/Erysipelas | Erysipelas | Erysipelas () is a relatively common bacterial infection of the superficial layer of the skin (upper dermis), extending to the superficial lymphatic vessels within the skin, characterized by a raised, well-defined, tender, bright red rash, typically on the face or legs, but which can occur anywhere on the skin. It is a form of cellulitis and is potentially serious.
Erysipelas is usually caused by the bacterium Streptococcus pyogenes, also known as group A β-hemolytic streptococci, which enters the body through a break in the skin, such as a scratch or an insect bite. It is more superficial than cellulitis and is typically more raised and demarcated. The term comes from the Greek ἐρυσίπελας (erysípelas), meaning red skin.
In animals erysipelas is a disease caused by infection with the bacterium Erysipelothrix rhusiopathiae. In animals it is called Diamond Skin Disease and occurs especially in pigs. Heart valves and skin are affected. Erysipelothrix rhusiopathiae can also infect humans but in that case the infection is known as erysipeloid and is an occupational skin disease.
Signs and symptoms
Symptoms often occur suddenly. Affected individuals may develop a fever, shivering, chills, fatigue, headaches and vomiting and be generally unwell within 48 hours of the initial infection. The red plaque enlarges rapidly and has a sharply demarcated, raised edge. It may appear swollen, feel firm, warm and tender to touch and have a consistency similar to orange peel. Pain may be extreme.
More severe infections can result in vesicles (pox or insect bite-like marks), blisters, and petechiae (small purple or red spots), with possible skin necrosis (death). Lymph nodes may be swollen and lymphedema may occur. Occasionally a red streak extending to the lymph node can be seen.
The infection may occur on any part of the skin, including the face, arms, fingers, legs and toes; it tends to favour the extremities. The umbilical stump and sites of lymphoedema are also common sites affected.
Fat tissue and facial areas, typically around the eyes, ears and cheeks, are most susceptible to infection. Repeated infection of the extremities can lead to chronic swelling (lymphoedema).
Cause
Most cases of erysipelas are due to Streptococcus pyogenes, also known as group A β-hemolytic streptococci, less commonly to group C or G streptococci and rarely to Staphylococcus aureus. Newborns may contract erysipelas due to Streptococcus agalactiae, also known as group B streptococcus or GBS.
The infecting bacteria can enter the skin through minor trauma, human, insect or animal bites, surgical incisions, ulcers, burns and abrasions. There may be underlying eczema or athlete's foot (tinea pedis), and it can originate from streptococci bacteria in the subject's own nasal passages or ear.
The rash is due to an exotoxin, not the Streptococcus bacteria, and is found in areas where no symptoms are present; e.g. the infection may be in the nasopharynx, but the rash is usually found on the epidermis and superficial lymphatics.
Diagnosis
Erysipelas is usually diagnosed by the clinician looking at the characteristic well-demarcated rash following a history of injury or recognition of one of the risk factors.
Tests, if performed, may show a high white cell count, raised CRP or positive blood culture identifying the organism. Skin cultures are often negative.
Erysipelas must be differentiated from herpes zoster, angioedema, contact dermatitis, erythema chronicum migrans of early Lyme disease, gout, septic arthritis, septic bursitis, vasculitis, allergic reaction to an insect bite, acute drug reaction, deep vein thrombosis and diffuse inflammatory carcinoma of the breast.
Differentiating from cellulitis
Erysipelas can be distinguished from cellulitis by two particular features: its raised advancing edge and its sharp borders. The redness in cellulitis is not raised and its border is relatively indistinct. Bright redness of erysipelas has been described as a third differentiating feature.
Erysipelas does not affect subcutaneous tissue. It does not release pus, only serum or serous fluid. Subcutaneous edema may lead the physician to misdiagnose it as cellulitis.
Treatment
Treatment is with antibiotics (amoxicillin/clavulanic acid, cefalexin, or cloxacillin) taken by mouth, usually for five days, though sometimes longer.
Because of the risk of reinfection, prophylactic antibiotics are sometimes used after resolution of the initial condition.
Prognosis
The disease prognosis includes:
Spread of infection to other areas of the body can occur through the bloodstream (bacteremia), including septic arthritis. Glomerulonephritis can follow an episode of streptococcal erysipelas or other skin infection, but not rheumatic fever.
Recurrence of infection: erysipelas can recur in 18–30% of cases even after antibiotic treatment. A chronic state of recurrent erysipelas infections can occur with several predisposing factors, including alcoholism, diabetes and athlete's foot. Another predisposing factor is chronic cutaneous edema, which can in turn be caused by venous insufficiency or heart failure.
Lymphatic damage
Necrotizing fasciitis, commonly known as ‘flesh-eating’ bacterial infection, is a potentially deadly exacerbation of the infection if it spreads to deeper tissue.
Epidemiology
There is currently no validated recent data on the worldwide incidence of erysipelas. From 2004 to 2005, UK hospitals reported 69,576 cases of cellulitis and 516 cases of erysipelas. One book stated that several studies have placed the prevalence rate between one and 250 in every 10,000 people. The development of antibiotics, as well as increased sanitation standards, has contributed to the decreased rate of incidence. Erysipelas caused systemic illness in up to 40% of cases reported by UK hospitals, and 29% of people had recurrent episodes within three years. Anyone can be infected, although incidence rates are higher in infants and the elderly. Several studies also reported a higher incidence rate in women. Four out of five cases occur on the legs, although historically the face was a more frequent site.
Risk factors for developing the disease include
Arteriovenous fistula
Chronic skin conditions such as psoriasis, athlete's foot, and eczema
Excising the saphenous vein
Immune deficiency or compromise, such as
Diabetes
Alcoholism
Obesity
Human immunodeficiency virus (HIV)
In newborns, exposure of the umbilical cord and vaccination site injury
Issues in lymph or blood circulation
Leg ulcers
Lymphatic edema
Lymphatic obstruction
Lymphoedema
Nasopharyngeal infection
Nephrotic syndrome
Pregnancy
Previous episode(s) of erysipelas
Toe web intertrigo
Traumatic wounds
Venous insufficiency or disease
Preventive measures
Individuals can take preventive steps to decrease their risk of catching the disease. Properly cleaning and covering wounds is important for people with an open wound. Effectively treating athlete's foot or eczema if they were the cause of the initial infection will decrease the chance of the infection occurring again. People with diabetes should pay attention to maintaining good foot hygiene. It is also important to follow up with doctors to make sure the disease has not come back or spread. About one third of people who have had erysipelas will be infected again within three years. Rigorous antibiotics may be needed in the case of recurrent bacterial skin infections.
Notable cases
In Rodrigo Souza Leão's autobiographical novel All Dogs are Blue, he says that his erysipelas is cured by the antibiotic Benzetacil (Benzathine benzylpenicillin).
History
It was historically known as St Anthony's fire, with past treatments including muriated tincture of iron, a solution of Iron(III) chloride in alcohol.
| Biology and health sciences | Bacterial infections | Health |
58646 | https://en.wikipedia.org/wiki/Strait | Strait | A strait is a water body connecting two seas or two water basins. While the landform generally constricts the flow, the surface water still flows, for the most part, at the same elevation on both sides and through the strait in both directions. In some straits there may be a dominant directional current through the strait. Most commonly, it is a narrowing channel that lies between two land masses. Some straits are not navigable, for example because they are either too narrow or too shallow, or because of an unnavigable reef or archipelago. Straits are also known to be loci for sediment accumulation. Usually, sand-sized deposits occur at both of the strait's opposite exits, forming subaqueous fans or deltas.
Terminology
The terms channel, pass, or passage can be synonymous and used interchangeably with strait, although each is sometimes differentiated with varying senses. In Scotland, the terms firth and kyle are also sometimes used as synonyms for strait.
Many straits are economically important. Straits can be important shipping routes and wars have been fought for control of them.
Numerous artificial channels, called canals, have been constructed to connect two oceans or seas over land, such as the Suez Canal. Although rivers and canals often provide passage between two large lakes, and these seem to suit the formal definition of strait, they are not usually referred to as such. Rivers, and often canals, generally have a directional flow tied to changes in elevation, whereas straits often flow freely in either direction or switch direction, maintaining the same elevation. The term strait is typically reserved for much larger, wider features of the marine environment. There are exceptions, with straits being called canals: the Pearse Canal, for example.
Comparisons
Straits are the converse of isthmuses. That is, while a strait lies between two land masses and connects two large areas of ocean, an isthmus lies between two areas of ocean and connects two large land masses.
Some straits have the potential to generate significant tidal power using tidal stream turbines. Tides are more predictable than wave power or wind power. The Pentland Firth (a strait) may be capable of generating 10 GW. Cook Strait in New Zealand may be capable of generating 5.6 GW even though the total energy available in the flow is 15 GW.
Navigational (legal) regime
Straits used for international navigation through the territorial sea between one part of the high seas or an exclusive economic zone and another part of the high seas or an exclusive economic zone are subject to the legal regime of transit passage (Strait of Gibraltar, Dover Strait, Strait of Hormuz). The regime of innocent passage applies in straits used for international navigation (1) that connect a part of high seas or an exclusive economic zone with the territorial sea of a coastal nation (Straits of Tiran, Strait of Juan de Fuca, Strait of Baltiysk) and (2) in straits formed by an island of a state bordering the strait and its mainland if there exists seaward of the island a route through the high seas or through an exclusive economic zone of similar convenience with respect to navigational and hydrographical characteristics (Strait of Messina, Pentland Firth). There may be no suspension of innocent passage through such straits.
| Physical sciences | Oceanography | Earth science |
58664 | https://en.wikipedia.org/wiki/Gas%20turbine | Gas turbine | A gas turbine or gas turbine engine is a type of continuous flow internal combustion engine. The main parts common to all gas turbine engines form the power-producing part (known as the gas generator or core) and are, in the direction of flow:
a rotating gas compressor
a combustor
a compressor-driving turbine.
Additional components have to be added to the gas generator to suit its application. Common to all is an air inlet but with different configurations to suit the requirements of marine use, land use or flight at speeds varying from stationary to supersonic. A propelling nozzle is added to produce thrust for flight. An extra turbine is added to drive a propeller (turboprop) or ducted fan (turbofan) to reduce fuel consumption (by increasing propulsive efficiency) at subsonic flight speeds. An extra turbine is also required to drive a helicopter rotor or land-vehicle transmission (turboshaft), marine propeller or electrical generator (power turbine). Greater thrust-to-weight ratio for flight is achieved with the addition of an afterburner.
The basic operation of the gas turbine is a Brayton cycle with air as the working fluid: atmospheric air flows through the compressor that brings it to higher pressure; energy is then added by spraying fuel into the air and igniting it so that the combustion generates a high-temperature flow; this high-temperature pressurized gas enters a turbine, producing a shaft work output in the process, used to drive the compressor; the unused energy comes out in the exhaust gases that can be repurposed for external work, such as directly producing thrust in a turbojet engine, or rotating a second, independent turbine (known as a power turbine) that can be connected to a fan, propeller, or electrical generator. The purpose of the gas turbine determines the design so that the most desirable split of energy between the thrust and the shaft work is achieved. The fourth step of the Brayton cycle (cooling of the working fluid) is omitted, as gas turbines are open systems that do not reuse the same air.
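As a rough numerical illustration of the cycle just described (a sketch under textbook ideal-gas assumptions, not data for any particular engine), the ideal Brayton cycle's thermal efficiency depends only on the compressor pressure ratio and the heat-capacity ratio of air:

```python
def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    # Ideal (cold-air-standard) Brayton result: eta = 1 - r**(-(gamma - 1)/gamma).
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (10, 20, 40):
    print(r, round(brayton_ideal_efficiency(r), 3))
# 10 0.482, 20 0.575, 40 0.651: raising the overall pressure ratio raises the
# ideal efficiency, which is why compressor design is central to performance.
```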
Gas turbines are used to power aircraft, trains, ships, electrical generators, pumps, gas compressors, and tanks.
Timeline of development
50: Earliest records of Hero's engine (aeolipile). It most likely served no practical purpose, and was rather more of a curiosity; nonetheless, it demonstrated an important principle of physics that all modern turbine engines rely on.
1000: The "Trotting Horse Lamp" (, zŏumădēng) was used by the Chinese at lantern fairs as early as the Northern Song dynasty. When the lamp is lit, the heated airflow rises and drives an impeller with horse-riding figures attached on it, whose shadows are then projected onto the outer screen of the lantern.
1500: The Smoke jack was drawn by Leonardo da Vinci: Hot air from a fire rises through a single-stage axial turbine rotor mounted in the exhaust duct of the fireplace and turns the roasting spit by gear-chain connection.
1791: A patent was given to John Barber, an Englishman, for the first true gas turbine. His invention had most of the elements present in the modern day gas turbines. The turbine was designed to power a horseless carriage.
1894: Sir Charles Parsons patented the idea of propelling a ship with a steam turbine, and built a demonstration vessel, the Turbinia, easily the fastest vessel afloat at the time.
1899: Charles Gordon Curtis patented the first gas turbine engine in the US.
1900: Sanford Alexander Moss submitted a thesis on gas turbines. In 1903, Moss became an engineer for General Electric's Steam Turbine Department in Lynn, Massachusetts. While there, he applied some of his concepts in the development of the turbocharger.
1903: A Norwegian, Ægidius Elling, built the first gas turbine that was able to produce more power than needed to run its own components, which was considered an achievement in a time when knowledge about aerodynamics was limited. Using rotary compressors and turbines it produced .
1904: A gas turbine engine designed by Franz Stolze, based on his earlier 1873 patent application, is built and tested in Berlin. The Stolze gas turbine was too inefficient to sustain its own operation.
1906: The Armengaud-Lemale gas turbine tested in France. This was a relatively large machine which included a 25-stage centrifugal compressor designed by Auguste Rateau and built by the Brown Boveri Company. The gas turbine could sustain its own air compression but was too inefficient to produce useful work.
1910: The first operational Holzwarth gas turbine (pulse combustion) achieves an output of . Planned output of the machine was and its efficiency is below that of contemporary reciprocating engines.
1920s: The practical theory of gas flow through passages was developed into the more formal (and applicable to turbines) theory of gas flow past airfoils by A. A. Griffith, resulting in the publishing in 1926 of An Aerodynamic Theory of Turbine Design. Working testbed designs of axial turbines suitable for driving a propeller were developed by the Royal Aeronautical Establishment.
1930: Having found no interest from the RAF for his idea, Frank Whittle patented the design for a centrifugal gas turbine for jet propulsion. The first successful test run of his engine occurred in England in April 1937.
1932: The Brown Boveri Company of Switzerland starts selling axial compressor and turbine turbosets as part of the turbocharged steam generating Velox boiler. Following the gas turbine principle, the steam evaporation tubes are arranged within the gas turbine combustion chamber; the first Velox plant is erected at a French Steel mill in Mondeville, Calvados.
1936: The first constant flow industrial gas turbine is commissioned by the Brown Boveri Company and goes into service at Sun Oil's Marcus Hook refinery in Pennsylvania, US.
1937: Working proof-of-concept prototype turbojet engine runs in UK (Frank Whittle's) and Germany (Hans von Ohain's Heinkel HeS 1). Henry Tizard secures UK government funding for further development of Power Jets engine.
1939: The First 4 MW utility power generation gas turbine is built by the Brown Boveri Company for an emergency power station in Neuchâtel, Switzerland. The turbojet powered Heinkel He 178, the world's first jet aircraft, makes its first flight.
1940: Jendrassik Cs-1, a turboprop engine, made its first bench run. The Cs-1 was designed by Hungarian engineer György Jendrassik, and was intended to power a Hungarian twin-engine heavy fighter, the RMI-1. Work on the Cs-1 stopped in 1941 without the type having powered any aircraft.
1944: The Junkers Jumo 004 engine enters full production, powering the first German military jets such as the Messerschmitt Me 262. This marks the beginning of the reign of gas turbines in the sky.
1946: National Gas Turbine Establishment formed from Power Jets and the RAE turbine division to bring together Whittle and Hayne Constant's work. In Beznau, Switzerland the first commercial reheated/recuperated unit generating 27 MW was commissioned.
1947: A Metropolitan Vickers G1 (Gatric) becomes the first marine gas turbine when it completes sea trials on the Royal Navy's M.G.B 2009 vessel. The Gatric was an aeroderivative gas turbine based on the Metropolitan Vickers F2 jet engine.
1995: Siemens becomes the first manufacturer of large electricity producing gas turbines to incorporate single crystal turbine blade technology into their production models, allowing higher operating temperatures and greater efficiency.
2011: Mitsubishi Heavy Industries tests the first >60% efficiency combined cycle gas turbine (the M501J) at its Takasago, Hyōgo, works.
Theory of operation
In an ideal gas turbine, gases undergo four thermodynamic processes: an isentropic compression, an isobaric (constant pressure) combustion, an isentropic expansion and isobaric heat rejection. Together, these make up the Brayton cycle, also known as the "constant pressure cycle". It is distinguished from the Otto cycle, in that all the processes (compression, ignition combustion, exhaust), occur at the same time, continuously.
In a real gas turbine, mechanical energy is changed irreversibly (due to internal friction and turbulence) into pressure and thermal energy when the gas is compressed (in either a centrifugal or axial compressor). Heat is added in the combustion chamber and the specific volume of the gas increases, accompanied by a slight loss in pressure. During expansion through the stator and rotor passages in the turbine, irreversible energy transformation once again occurs. Fresh air is taken in, in place of the heat rejection.
Air is taken in by a compressor, called a gas generator, with either an axial or centrifugal design, or a combination of the two. This air is then ducted into the combustor section, which can be of an annular, can, or can-annular design. In the combustor section, roughly 70% of the air from the compressor is ducted around the combustor itself for cooling purposes. The remaining roughly 30% of the air is mixed with fuel and ignited by the already burning air-fuel mixture, which then expands producing power across the turbine. This expansion of the mixture then leaves the combustor section and has its velocity increased across the turbine section to strike the turbine blades, spinning the disc they are attached to, thus creating useful power. Of the power produced, 60–70% is solely used to power the gas generator. The remaining power is used to power what the engine is being used for, typically an aviation application, being thrust in a turbojet, driving the fan of a turbofan, rotor or accessory of a turboshaft, and gear reduction and propeller of a turboprop.
If the engine has a power turbine added to drive an industrial generator or a helicopter rotor, the exit pressure will be as close to the entry pressure as possible with only enough energy left to overcome the pressure losses in the exhaust ducting and expel the exhaust. For a turboprop engine there will be a particular balance between propeller power and jet thrust which gives the most economical operation. In a turbojet engine only enough pressure and energy is extracted from the flow to drive the compressor and other components. The remaining high-pressure gases are accelerated through a nozzle to provide a jet to propel an aircraft.
The smaller the engine, the higher the rotation rate of the shaft must be to attain the required blade tip speed. Blade-tip speed determines the maximum pressure ratios that can be obtained by the turbine and the compressor. This, in turn, limits the maximum power and efficiency that can be obtained by the engine. In order for tip speed to remain constant, if the diameter of a rotor is reduced by half, the rotational speed must double. For example, large jet engines operate around 10,000–25,000 rpm, while micro turbines spin as fast as 500,000 rpm.
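The tip-speed scaling can be checked with a one-line relation (a sketch; the 450 m/s tip speed is an assumed illustrative value):

```python
import math

def rpm_for_tip_speed(tip_speed_m_s, rotor_diameter_m):
    # tip speed = pi * D * (rpm / 60), so rpm = 60 * tip speed / (pi * D).
    return 60.0 * tip_speed_m_s / (math.pi * rotor_diameter_m)

tip = 450.0  # m/s, an assumed transonic tip speed chosen for illustration
for d in (1.0, 0.5, 0.05):
    print(f"D = {d} m -> {rpm_for_tip_speed(tip, d):,.0f} rpm")
# Halving the diameter doubles the rpm: a 5 cm rotor needs roughly 172,000 rpm
# to match the tip speed a 1 m rotor reaches at about 8,600 rpm.
```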
Mechanically, gas turbines can be considerably less complex than reciprocating engines. Simple turbines might have one main moving part, the compressor/shaft/turbine rotor assembly, with other moving parts in the fuel system. This, in turn, can translate into price. For instance, costing for materials, the Jumo 004 proved cheaper than the Junkers 213 piston engine, which was , and needed only 375 hours of lower-skill labor to complete (including manufacture, assembly, and shipping), compared to 1,400 for the BMW 801. This, however, also translated into poor efficiency and reliability. More advanced gas turbines (such as those found in modern jet engines or combined cycle power plants) may have 2 or 3 shafts (spools), hundreds of compressor and turbine blades, movable stator blades, and extensive external tubing for fuel, oil and air systems; they use temperature resistant alloys, and are made with tight specifications requiring precision manufacture. All this often makes the construction of a simple gas turbine more complicated than a piston engine.
Moreover, to reach optimum performance in modern gas turbine power plants the gas needs to be prepared to exact fuel specifications. Fuel gas conditioning systems treat the natural gas to reach the exact fuel specification prior to entering the turbine in terms of pressure, temperature, gas composition, and the related Wobbe index.
The primary advantage of a gas turbine engine is its power to weight ratio.
Since significant useful work can be generated by a relatively lightweight engine, gas turbines are perfectly suited for aircraft propulsion.
Thrust bearings and journal bearings are a critical part of a design. They are hydrodynamic oil bearings or oil-cooled rolling-element bearings. Foil bearings are used in some small machines such as micro turbines and also have strong potential for use in small gas turbines/auxiliary power units.
Creep
A major challenge facing turbine design, especially turbine blades, is reducing the creep that is induced by the high temperatures and stresses that are experienced during operation. Higher operating temperatures are continuously sought in order to increase efficiency, but come at the cost of higher creep rates. Several methods have therefore been employed in an attempt to achieve optimal performance while limiting creep, with the most successful ones being high performance coatings and single crystal superalloys. These technologies work by limiting deformation that occurs by mechanisms that can be broadly classified as dislocation glide, dislocation climb and diffusional flow.
Protective coatings provide thermal insulation of the blade and offer oxidation and corrosion resistance. Thermal barrier coatings (TBCs) are often stabilized zirconium dioxide-based ceramics and oxidation/corrosion resistant coatings (bond coats) typically consist of aluminides or MCrAlY (where M is typically Fe and/or Cr) alloys. Using TBCs limits the temperature exposure of the superalloy substrate, thereby decreasing the diffusivity of the active species (typically vacancies) within the alloy and reducing dislocation and vacancy creep. It has been found that a coating of 1–200 μm can decrease blade temperatures by up to .
Bond coats are directly applied onto the surface of the substrate using pack carburization and serve the dual purpose of providing improved adherence for the TBC and oxidation resistance for the substrate. The Al from the bond coats forms Al2O3 on the TBC-bond coat interface which provides the oxidation resistance, but also results in the formation of an undesirable interdiffusion (ID) zone between itself and the substrate. The oxidation resistance outweighs the drawbacks associated with the ID zone as it increases the lifetime of the blade and limits the efficiency losses caused by a buildup on the outside of the blades.
Nickel-based superalloys boast improved strength and creep resistance due to their composition and resultant microstructure. The gamma (γ) FCC nickel is alloyed with aluminum and titanium in order to precipitate a uniform dispersion of the coherent gamma-prime (γ') phases. The finely dispersed γ' precipitates impede dislocation motion and introduce a threshold stress, increasing the stress required for the onset of creep. Furthermore, γ' is an ordered L12 phase that makes it harder for dislocations to shear past it. Further, refractory elements such as rhenium and ruthenium can be added in solid solution to improve creep strength. The addition of these elements reduces the diffusion of the gamma prime phase, thus preserving the fatigue resistance, strength, and creep resistance. The development of single crystal superalloys has led to significant improvements in creep resistance as well. Due to the lack of grain boundaries, single crystals eliminate Coble creep and consequently deform by fewer modes – decreasing the creep rate. Although single crystals have lower creep at high temperatures, they have significantly lower yield stresses at room temperature where strength is determined by the Hall-Petch relationship. Care needs to be taken in order to optimize the design parameters to limit high temperature creep while not decreasing low temperature yield strength.
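The temperature sensitivity motivating all of these measures can be illustrated with a standard power-law (Norton) creep model (a generic textbook form with assumed constants, not parameters of any real superalloy):

```python
import math

def creep_rate(sigma_mpa, T_kelvin, A=1e-10, n=5.0, Q=300e3, R=8.314):
    # Steady-state Norton/Arrhenius form: rate = A * sigma**n * exp(-Q / (R*T)).
    # A, n and Q are assumed illustrative values, not data for any real alloy.
    return A * sigma_mpa ** n * math.exp(-Q / (R * T_kelvin))

base = creep_rate(200.0, 1200.0)
hot = creep_rate(200.0, 1300.0)
print(f"creep-rate increase for +100 K: {hot / base:.1f}x")  # roughly 10x here
```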
Types
Jet engines
Airbreathing jet engines are gas turbines optimized to produce thrust from the exhaust gases, or from ducted fans connected to the gas turbines. Jet engines that produce thrust from the direct impulse of exhaust gases are often called turbojets. While still in service with many militaries and civilian operators, turbojets have mostly been phased out in favor of the turbofan engine due to the turbojet's low fuel efficiency, and high noise. Those that generate thrust with the addition of a ducted fan are called turbofans or (rarely) fan-jets. These engines produce nearly 80% of their thrust by the ducted fan, which can be seen from the front of the engine. They come in two types, low-bypass turbofan and high bypass, the difference being the amount of air moved by the fan, called "bypass air". These engines offer the benefit of more thrust without extra fuel consumption.
Gas turbines are also used in many liquid-fuel rockets, where gas turbines are used to power a turbopump to permit the use of lightweight, low-pressure tanks, reducing the empty weight of the rocket.
Turboprop engines
A turboprop engine is a turbine engine that drives an aircraft propeller using a reduction gear to translate the high turbine section operating speed (often in the tens of thousands of rpm) into the low thousands of rpm necessary for efficient propeller operation. The benefit of using the turboprop engine is to take advantage of the turbine engine's high power-to-weight ratio to drive a propeller, thus allowing a more powerful, but also smaller, engine to be used. Turboprop engines are used on a wide range of business aircraft such as the Pilatus PC-12, commuter aircraft such as the Beechcraft 1900, and small cargo aircraft such as the Cessna 208 Caravan or De Havilland Canada Dash 8, and large aircraft (typically military) such as the Airbus A400M transport, Lockheed AC-130 and the 60-year-old Tupolev Tu-95 strategic bomber. While military turboprop engines can vary, in the civilian market there are two primary engines to be found: the Pratt & Whitney Canada PT6, a free-turbine turboshaft engine, and the Honeywell TPE331, a fixed turbine engine (formerly designated as the Garrett AiResearch 331).
Aeroderivative gas turbines
Aeroderivative gas turbines are generally based on existing aircraft gas turbine engines and are smaller and lighter than industrial gas turbines.
Aeroderivatives are used in electrical power generation due to their ability to be shut down and handle load changes more quickly than industrial machines. They are also used in the marine industry to reduce weight. Common types include the General Electric LM2500, General Electric LM6000, and aeroderivative versions of the Pratt & Whitney PW4000, Pratt & Whitney FT4 and Rolls-Royce RB211.
Amateur gas turbines
Increasing numbers of gas turbines are being used or even constructed by amateurs.
In its most straightforward form, these are commercial turbines acquired through military surplus or scrapyard sales, then operated for display as part of the hobby of engine collecting. In its most extreme form, amateurs have even rebuilt engines beyond professional repair and then used them to compete for the land speed record.
The simplest form of self-constructed gas turbine employs an automotive turbocharger as the core component. A combustion chamber is fabricated and plumbed between the compressor and turbine sections.
More sophisticated turbojets are also built, where their thrust and light weight are sufficient to power large model aircraft. The Schreckling design constructs the entire engine from raw materials, including the fabrication of a centrifugal compressor wheel from plywood, epoxy and wrapped carbon fibre strands.
Several small companies now manufacture small turbines and parts for the amateur. Most turbojet-powered model aircraft are now using these commercial and semi-commercial microturbines, rather than a Schreckling-like home-build.
Auxiliary power units
Small gas turbines are used as auxiliary power units (APUs) to supply auxiliary power to larger, mobile, machines such as an aircraft, and are a turboshaft design. They supply:
compressed air for air cycle machine style air conditioning and ventilation,
compressed air start-up power for larger jet engines,
mechanical (shaft) power to a gearbox to drive shafted accessories, and
electrical, hydraulic and other power-transmission sources to consuming devices remote from the APU.
Industrial gas turbines for power generation
Industrial gas turbines differ from aeronautical designs in that the frames, bearings, and blading are of heavier construction. They are also much more closely integrated with the devices they power—often an electric generator—and the secondary-energy equipment that is used to recover residual energy (largely heat).
They range in size from portable mobile plants to large, complex systems weighing more than a hundred tonnes housed in purpose-built buildings. When the gas turbine is used solely for shaft power, its thermal efficiency is about 30%. However, it may be cheaper to buy electricity than to generate it. Therefore, many engines are used in CHP (Combined Heat and Power) configurations that can be small enough to be integrated into portable container configurations.
Gas turbines can be particularly efficient when waste heat from the turbine is recovered by a heat recovery steam generator (HRSG) to power a conventional steam turbine in a combined cycle configuration. The 605 MW General Electric 9HA achieved a 62.22% efficiency rate with temperatures as high as .
For 2018, GE offers its 826 MW HA at over 64% efficiency in combined cycle due to advances in additive manufacturing and combustion breakthroughs, up from 63.7% in 2017 orders and on track to achieve 65% by the early 2020s.
In March 2018, GE Power achieved a 63.08% gross efficiency for its 7HA turbine.
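The combined-cycle figures above can be sanity-checked with a simple energy-cascade estimate (a sketch with assumed round numbers, not GE's actual design data):

```python
def combined_cycle_eff(eta_gt, eta_st, eta_recovery=0.9):
    # Bottoming cycle converts a fraction of the gas turbine's rejected heat:
    # eta_cc = eta_gt + (1 - eta_gt) * eta_recovery * eta_st.
    return eta_gt + (1.0 - eta_gt) * eta_recovery * eta_st

print(round(combined_cycle_eff(0.40, 0.38), 3))  # 0.605
# An assumed 40% gas turbine with a 38% steam cycle and 90% heat recovery
# lands near 60%, the regime of the 62-64% class figures quoted above.
```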
Aeroderivative gas turbines can also be used in combined cycles, leading to a higher efficiency, but it will not be as high as a specifically designed industrial gas turbine. They can also be run in a cogeneration configuration: the exhaust is used for space or water heating, or drives an absorption chiller for cooling the inlet air and increase the power output, technology known as turbine inlet air cooling.
Another significant advantage is their ability to be turned on and off within minutes, supplying power during peak, or unscheduled, demand. Since single cycle (gas turbine only) power plants are less efficient than combined cycle plants, they are usually used as peaking power plants, which operate anywhere from several hours per day to a few dozen hours per year—depending on the electricity demand and the generating capacity of the region. In areas with a shortage of base-load and load following power plant capacity or with low fuel costs, a gas turbine powerplant may regularly operate most hours of the day. A large single-cycle gas turbine typically produces 100 to 400 megawatts of electric power and has 35–40% thermodynamic efficiency.
Industrial gas turbines for mechanical drive
Industrial gas turbines that are used solely for mechanical drive or used in collaboration with a recovery steam generator differ from power generating sets in that they are often smaller and feature a dual shaft design as opposed to a single shaft. The power range varies from 1 megawatt up to 50 megawatts. These engines are connected directly or via a gearbox to either a pump or compressor assembly. The majority of installations are used within the oil and gas industries. Mechanical drive applications increase efficiency by around 2%.
Oil and gas platforms require these engines to drive compressors to inject gas into the wells to force oil up via another bore, or to compress the gas for transportation. They are also often used to provide power for the platform. These platforms do not need to use the engine in collaboration with a CHP system due to getting the gas at an extremely reduced cost (often free from burn off gas). The same companies use pump sets to drive the fluids to land and across pipelines in various intervals.
Compressed air energy storage
One modern development seeks to improve efficiency in another way, by separating the compressor and the turbine with a compressed air store. In a conventional turbine, up to half the generated power is used driving the compressor. In a compressed air energy storage configuration, power is used to drive the compressor, and the compressed air is released to operate the turbine when required.
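A back-of-the-envelope back-work-ratio calculation illustrates why separating compression from expansion is attractive (assumed, illustrative temperatures for an ideal-gas cycle):

```python
cp = 1005.0             # J/(kg K), specific heat of air at constant pressure
T1, T2 = 288.0, 680.0   # assumed compressor inlet/outlet temperatures (K)
T3, T4 = 1500.0, 900.0  # assumed turbine inlet/outlet temperatures (K)

w_compressor = cp * (T2 - T1)  # specific work absorbed by the compressor
w_turbine = cp * (T3 - T4)     # specific work produced by the turbine
print(f"back-work ratio: {w_compressor / w_turbine:.2f}")  # about 0.65 here
# Storage decouples the two terms in time: cheap off-peak power pays for
# w_compressor, and the stored air lets the turbine deliver w_turbine on demand.
```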
Turboshaft engines
Turboshaft engines are used to drive compressors in gas pumping stations and natural gas liquefaction plants. They are also used in aviation to power all but the smallest modern helicopters, and function as an auxiliary power unit in large commercial aircraft. A primary shaft carries the compressor and its turbine which, together with a combustor, is called a Gas Generator. A separately spinning power-turbine is usually used to drive the rotor on helicopters. Allowing the gas generator and power turbine/rotor to spin at their own speeds allows more flexibility in their design.
Radial gas turbines
Scale jet engines
Also known as miniature gas turbines or micro-jets.
Kurt Schreckling, the pioneer of modern micro-jets, produced one of the world's first micro-turbines, the FD3/67. This engine can produce up to 22 newtons of thrust, and can be built by most mechanically minded people with basic engineering tools, such as a metal lathe.
Microturbines
Evolved from piston engine turbochargers, aircraft APUs or small jet engines, microturbines are 25 to 500 kilowatt turbines the size of a refrigerator.
Microturbines have around 15% efficiencies without a recuperator, 20 to 30% with one and they can reach 85% combined thermal-electrical efficiency in cogeneration.
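The cogeneration arithmetic works out as follows (a sketch with assumed round numbers consistent with the ranges quoted above):

```python
fuel_in = 100.0      # kW of fuel energy, a round number for illustration
eta_electric = 0.25  # within the 20-30% recuperated range cited above
electric_out = fuel_in * eta_electric
recovered_heat = 60.0  # kW of exhaust heat assumed captured for cogeneration

combined = (electric_out + recovered_heat) / fuel_in
print(f"combined thermal-electrical efficiency: {combined:.0%}")  # 85%
```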
External combustion
Most gas turbines are internal combustion engines but it is also possible to manufacture an external combustion gas turbine which is, effectively, a turbine version of a hot air engine.
Those systems are usually indicated as EFGT (Externally Fired Gas Turbine) or IFGT (Indirectly Fired Gas Turbine).
External combustion has been used for the purpose of using pulverized coal or finely ground biomass (such as sawdust) as a fuel. In the indirect system, a heat exchanger is used and only clean air with no combustion products travels through the power turbine. The thermal efficiency is lower in the indirect type of external combustion; however, the turbine blades are not subjected to combustion products and much lower quality (and therefore cheaper) fuels are able to be used.
When external combustion is used, it is possible to use exhaust air from the turbine as the primary combustion air. This effectively reduces global heat losses, although heat losses associated with the combustion exhaust remain inevitable.
Closed-cycle gas turbines based on helium or supercritical carbon dioxide also hold promise for use with future high temperature solar and nuclear power generation.
In surface vehicles
Gas turbines are often used on ships, locomotives, helicopters, tanks, and to a lesser extent, on cars, buses, and motorcycles.
A key advantage of jets and turboprops for airplane propulsion – their superior performance at high altitude compared to piston engines, particularly naturally aspirated ones – is irrelevant in most automobile applications. Their power-to-weight advantage, though less critical than for aircraft, is still important.
Gas turbines offer a high-powered engine in a very small and light package. However, they are not as responsive and efficient as small piston engines over the wide range of RPMs and powers needed in vehicle applications. In series hybrid vehicles, as the driving electric motors are mechanically detached from the electricity generating engine, the responsiveness, poor performance at low speed and low efficiency at low output problems are much less important. The turbine can be run at optimum speed for its power output, and batteries and ultracapacitors can supply power as needed, with the engine cycled on and off to run it only at high efficiency. The emergence of the continuously variable transmission may also alleviate the responsiveness problem.
Turbines have historically been more expensive to produce than piston engines, though this is partly because piston engines have been mass-produced in huge quantities for decades, while small gas turbine engines are rarities; however, turbines are mass-produced in the closely related form of the turbocharger.
The turbocharger is basically a compact and simple free shaft radial gas turbine which is driven by the piston engine's exhaust gas. The centripetal turbine wheel drives a centrifugal compressor wheel through a common rotating shaft. This wheel supercharges the engine air intake to a degree that can be controlled by means of a wastegate or by dynamically modifying the turbine housing's geometry (as in a variable geometry turbocharger).
It mainly serves as a power recovery device which converts a great deal of otherwise wasted thermal and kinetic energy into engine boost.
Turbo-compound engines (actually employed on some semi-trailer trucks) are fitted with blow down turbines which are similar in design and appearance to a turbocharger except for the turbine shaft being mechanically or hydraulically connected to the engine's crankshaft instead of to a centrifugal compressor, thus providing additional power instead of boost. While the turbocharger is a pressure turbine, a power recovery turbine is a velocity one.
Passenger road vehicles (cars, bikes, and buses)
A number of experiments have been conducted with gas turbine powered automobiles, the largest by Chrysler. More recently, there has been some interest in the use of turbine engines for hybrid electric cars. For instance, a consortium led by micro gas turbine company Bladon Jets has secured investment from the Technology Strategy Board to develop an Ultra Lightweight Range Extender (ULRE) for next-generation electric vehicles. The objective of the consortium, which includes luxury car maker Jaguar Land Rover and leading electrical machine company SR Drives, is to produce the world's first commercially viable – and environmentally friendly – gas turbine generator designed specifically for automotive applications.
The common turbocharger for gasoline or diesel engines is also a turbine derivative.
Concept cars
The first serious investigation of using a gas turbine in cars was in 1946 when two engineers, Robert Kafka and Robert Engerstein of Carney Associates, a New York engineering firm, came up with the concept where a unique compact turbine engine design would provide power for a rear wheel drive car. After an article appeared in Popular Science, there was no further work, beyond the paper stage.
Early concepts (1950s/60s)
In 1950, designer F.R. Bell and Chief Engineer Maurice Wilks from British car manufacturers Rover unveiled the first car powered with a gas turbine engine. The two-seater JET1 had the engine positioned behind the seats, air intake grilles on either side of the car, and exhaust outlets on the top of the tail. During tests, the car reached top speeds of , at a turbine speed of 50,000 rpm. After being shown in the United Kingdom and the United States in 1950, JET1 was further developed, and was subjected to speed trials on the Jabbeke highway in Belgium in June 1952, where it exceeded . The car ran on petrol, paraffin (kerosene) or diesel oil, but fuel consumption problems proved insurmountable for a production car. JET1 is on display at the London Science Museum.
A French turbine-powered car, the SOCEMA-Grégoire, was displayed at the October 1952 Paris Auto Show. It was designed by the French engineer Jean-Albert Grégoire.
The first turbine-powered car built in the US was the GM Firebird I which began evaluations in 1953. While photos of the Firebird I may suggest that the jet turbine's thrust propelled the car like an aircraft, the turbine actually drove the rear wheels. The Firebird I was never meant as a commercial passenger car and was built solely for testing & evaluation as well as public relation purposes. Additional Firebird concept cars, each powered by gas turbines, were developed for the 1953, 1956 and 1959 Motorama auto shows. The GM Research gas turbine engine also was fitted to a series of transit buses, starting with the Turbo-Cruiser I of 1953.
Starting in 1954 with a modified Plymouth, the American car manufacturer Chrysler demonstrated several prototype gas turbine-powered cars from the early 1950s through the early 1980s. Chrysler built fifty Chrysler Turbine Cars in 1963 and conducted the only consumer trial of gas turbine-powered cars. Each of their turbines employed a unique rotating recuperator, referred to as a regenerator that increased efficiency.
In 1954, Fiat unveiled a concept car with a turbine engine, called Fiat Turbina. This vehicle, looking like an aircraft with wheels, used a unique combination of both jet thrust and the engine driving the wheels. Speeds of were claimed.
In the 1960s, Ford and GM also were developing gas turbine semi-trucks. Ford displayed the Big Red at the 1964 World's Fair. With the trailer, it was long, high, and painted crimson red. It contained the Ford-developed gas turbine engine, with output power and torque of and . The cab boasted a highway map of the continental U.S., a mini-kitchen, bathroom, and a TV for the co-driver. The fate of the truck was unknown for several decades, but it was rediscovered in early 2021 in private hands, having been restored to running order. The Chevrolet division of GM built the Turbo Titan series of concept trucks with turbine motors as analogs of the Firebird concepts, including Turbo Titan I (, shares GT-304 engine with Firebird II), Turbo Titan II (, shares GT-305 engine with Firebird III), and Turbo Titan III (1965, GT-309 engine); in addition, the GM Bison gas turbine truck was shown at the 1964 World's Fair.
Emissions and fuel economy (1970s/80s)
As a result of the U.S. Clean Air Act Amendments of 1970, research was funded into developing automotive gas turbine technology. Design concepts and vehicles were conducted by Chrysler, General Motors, Ford (in collaboration with AiResearch), and American Motors (in conjunction with Williams Research). Long-term tests were conducted to evaluate comparable cost efficiency. Several AMC Hornets were powered by a small Williams regenerative gas turbine weighing and producing at 4450 rpm.
In 1982, General Motors used an Oldsmobile Delta 88 powered by a gas turbine using pulverised coal dust. This was considered as a way for the United States and the western world to reduce dependence on Middle East oil at the time.
Toyota demonstrated several gas turbine powered concept cars, such as the Century gas turbine hybrid in 1975, the Sports 800 Gas Turbine Hybrid in 1979 and the GTV in 1985. No production vehicles were made. The GT24 engine was exhibited in 1977 without a vehicle.
Later development
In the early 1990s, Volvo introduced the Volvo ECC which was a gas turbine powered hybrid electric vehicle.
In 1993, General Motors developed a gas turbine powered EV1 series hybrid—as a prototype of the General Motors EV1. A Williams International 40 kW turbine drove an alternator which powered the battery–electric powertrain. The turbine design included a recuperator. In 2006, GM went into the EcoJet concept car project with Jay Leno.
At the 2010 Paris Motor Show Jaguar demonstrated its Jaguar C-X75 concept car. This electrically powered supercar has a top speed of and can go from in 3.4 seconds. It uses lithium-ion batteries to power four electric motors which combine to produce 780 bhp. It will travel on a single charge of the batteries, and uses a pair of Bladon Micro Gas Turbines to re-charge the batteries extending the range to .
Racing cars
The first race car (in concept only) fitted with a turbine was conceived in 1955 by a US Air Force group as a hobby project, with a turbine loaned to them by Boeing and a race car owned by the Firestone Tire & Rubber company. The first race car fitted with a turbine for the goal of actual racing came when Rover and the BRM Formula One team joined forces to produce the Rover-BRM, a gas turbine powered coupe, which entered the 1963 24 Hours of Le Mans, driven by Graham Hill and Richie Ginther. It averaged and had a top speed of . American Ray Heppenstall joined Howmet Corporation and McKee Engineering together to develop their own gas turbine sports car in 1968, the Howmet TX, which ran several American and European events, including two wins, and also participated in the 1968 24 Hours of Le Mans. The cars used Continental gas turbines, which eventually set six FIA land speed records for turbine-powered cars.
For open wheel racing, 1967's revolutionary STP-Paxton Turbocar fielded by racing and entrepreneurial legend Andy Granatelli and driven by Parnelli Jones nearly won the Indianapolis 500; the Pratt & Whitney ST6B-62 powered turbine car was almost a lap ahead of the second place car when a gearbox bearing failed just three laps from the finish line. The next year the STP Lotus 56 turbine car won the Indianapolis 500 pole position even though new rules restricted the air intake dramatically. In 1971 Team Lotus principal Colin Chapman introduced the Lotus 56B F1 car, powered by a Pratt & Whitney STN 6/76 gas turbine. Chapman had a reputation of building radical championship-winning cars, but had to abandon the project because there were too many problems with turbo lag.
Buses
General Motors fitted the GT-30x series of gas turbines (branded "Whirlfire") to several prototype buses in the 1950s and 1960s, including Turbo-Cruiser I (1953, GT-300); Turbo-Cruiser II (1964, GT-309); Turbo-Cruiser III (1968, GT-309); RTX (1968, GT-309); and RTS 3T (1972).
The arrival of the Capstone Turbine has led to several hybrid bus designs, starting with HEV-1 by AVS of Chattanooga, Tennessee in 1999, and closely followed by Ebus and ISE Research in California, and DesignLine Corporation in New Zealand (and later the United States). AVS turbine hybrids were plagued with reliability and quality control problems, resulting in the liquidation of AVS in 2003. The most successful design by Designline is now operated in 5 cities in 6 countries, with over 30 buses in operation worldwide, and orders for several hundred being delivered to Baltimore and New York City.
Brescia, Italy, is using serial hybrid buses powered by microturbines on routes through the historical sections of the city.
Motorcycles
The MTT Turbine Superbike appeared in 2000 (hence the designation of Y2K Superbike by MTT) and is the first production motorcycle powered by a turbine engine – specifically, a Rolls-Royce Allison model 250 turboshaft engine, producing about 283 kW (380 bhp). Speed-tested to 365 km/h or 227 mph (according to some stories, the testing team ran out of road during the test), it holds the Guinness World Record for most powerful production motorcycle and most expensive production motorcycle, with a price tag of US$185,000.
Trains
Several locomotive classes have been powered by gas turbines, the most recent incarnation being Bombardier's JetTrain.
Tanks
The Third Reich Wehrmacht Heer's development division, the Heereswaffenamt (Army Ordnance Board), studied a number of gas turbine engine designs for use in tanks starting in mid-1944. The first gas turbine engine design intended for use in armored fighting vehicle propulsion, the BMW 003-based GT 101, was meant for installation in the Panther tank. Towards the end of the war, a Jagdtiger was fitted with one of the aforementioned gas turbines.
The second use of a gas turbine in an armored fighting vehicle was in 1954 when a unit, PU2979, specifically developed for tanks by C. A. Parsons and Company, was installed and trialed in a British Conqueror tank. The Stridsvagn 103 was developed in the 1950s and was the first mass-produced main battle tank to use a turbine engine, the Boeing T50. Since then, gas turbine engines have been used as auxiliary power units in some tanks and as main powerplants in Soviet/Russian T-80s and U.S. M1 Abrams tanks, among others. They are lighter and smaller than diesel engines at the same sustained power output but the models installed to date are less fuel efficient than the equivalent diesel, especially at idle, requiring more fuel to achieve the same combat range. Successive models of M1 have addressed this problem with battery packs or secondary generators to power the tank's systems while stationary, saving fuel by reducing the need to idle the main turbine. T-80s can mount three large external fuel drums to extend their range. Russia has stopped production of the T-80 in favor of the diesel-powered T-90 (based on the T-72), while Ukraine has developed the diesel-powered T-80UD and T-84 with nearly the power of the gas-turbine tank. The French Leclerc tank's diesel powerplant features the "Hyperbar" hybrid supercharging system, where the engine's turbocharger is completely replaced with a small gas turbine which also works as an assisted diesel exhaust turbocharger, enabling engine RPM-independent boost level control and a higher peak boost pressure to be reached (than with ordinary turbochargers). This system allows a smaller displacement and lighter engine to be used as the tank's power plant and effectively removes turbo lag. This special gas turbine/turbocharger can also work independently from the main engine as an ordinary APU.
A turbine is theoretically more reliable and easier to maintain than a piston engine since it has a simpler construction with fewer moving parts, but in practice, turbine parts experience a higher wear rate due to their higher working speeds. The turbine blades are highly sensitive to dust and fine sand so that in desert operations air filters have to be fitted and changed several times daily. An improperly fitted filter, or a bullet or shell fragment that punctures the filter, can damage the engine. Piston engines (especially if turbocharged) also need well-maintained filters, but they are more resilient if the filter does fail.
Like most modern diesel engines used in tanks, gas turbines are usually multi-fuel engines.
Marine applications
Naval
Gas turbines are used in many naval vessels, where they are valued for their high power-to-weight ratio and their ships' resulting acceleration and ability to get underway quickly.
The first gas-turbine-powered naval vessel was the Royal Navy's motor gunboat MGB 2009 (formerly MGB 509) converted in 1947. Metropolitan-Vickers fitted their F2/3 jet engine with a power turbine. The Steam Gun Boat Grey Goose was converted to Rolls-Royce gas turbines in 1952 and operated as such from 1953. The Bold class Fast Patrol Boats Bold Pioneer and Bold Pathfinder built in 1953 were the first ships created specifically for gas turbine propulsion.
The first large-scale, partially gas-turbine powered ships were the Royal Navy's Type 81 (Tribal class) frigates with combined steam and gas powerplants. The first was commissioned in 1961.
The German Navy launched its first such vessel in 1961, with two Brown, Boveri & Cie gas turbines in the world's first combined diesel and gas propulsion system.
In 1962, the Soviet Navy commissioned the first of 25 ships powered by four M8E gas turbines in a combined gas and gas propulsion system; they were the first large ships in the world to be powered solely by gas turbines.
The Danish Navy had six Søløven-class torpedo boats (the export version of the British Brave-class fast patrol boat) in service from 1965 to 1990, each powered by three Bristol Proteus (later Rolls-Royce Proteus) marine gas turbines plus two General Motors diesel engines for better fuel economy at slower speeds. Denmark also produced ten Willemoes-class torpedo/guided-missile boats (in service from 1974 to 2000) with three Rolls-Royce Marine Proteus gas turbines, the same as in the Søløven class, and two General Motors diesel engines, again for improved fuel economy at slow speeds.
The Swedish Navy produced six Spica-class torpedo boats between 1966 and 1967, powered by three Bristol Siddeley Proteus 1282 turbines. They were later joined by twelve upgraded Norrköping-class ships with the same engines. With their aft torpedo tubes replaced by anti-ship missiles, they served as missile boats until the last was retired in 2005.
The Finnish Navy commissioned two corvettes, Turunmaa and Karjala, in 1968. They were equipped with one Rolls-Royce Olympus TM1 gas turbine and three Wärtsilä marine diesels for slower speeds. They were the fastest vessels in the Finnish Navy; they regularly achieved speeds of 35 knots, and 37.3 knots during sea trials. The Turunmaas were decommissioned in 2002. Karjala is today a museum ship in Turku, and Turunmaa serves as a floating machine shop and training ship for Satakunta Polytechnical College.
The next series of major naval vessels were the four Canadian helicopter-carrying destroyers first commissioned in 1972. They used two FT4 main propulsion engines, two FT12 cruise engines and three Solar Saturn 750 kW generators.
The first U.S. gas-turbine powered ship was a U.S. Coast Guard cutter commissioned in 1961, powered by two turbines driving controllable-pitch propellers. The larger High Endurance Cutters were the first class of larger cutters to utilize gas turbines; the first was commissioned in 1967. Since then, gas turbines have powered several classes of U.S. Navy surface combatants, including guided missile cruisers, and a modified ship of an existing class is to be the Navy's first amphibious assault ship powered by gas turbines.
The marine gas turbine operates in a more corrosive atmosphere due to the presence of sea salt in air and fuel, and the use of cheaper fuels.
Civilian maritime
Up to the late 1940s, much of the progress on marine gas turbines worldwide took place in design offices and engine builders' workshops, and development work was led by the British Royal Navy and other navies. While interest in the gas turbine for marine purposes, both naval and mercantile, continued to increase, the lack of available operating experience from early gas turbine projects limited the number of new ventures embarked upon for seagoing commercial vessels.
In 1951, the diesel–electric oil tanker Auris, 12,290 deadweight tonnage (DWT), was used to obtain operating experience with a main-propulsion gas turbine under service conditions at sea, and so became the first ocean-going merchant ship to be powered by a gas turbine. Built by Hawthorn Leslie at Hebburn-on-Tyne, UK, in accordance with plans and specifications drawn up by the Anglo-Saxon Petroleum Company, and launched on the UK's Princess Elizabeth's 21st birthday in 1947, the ship was designed with an engine room layout that would allow for the experimental use of heavy fuel in one of its high-speed engines, as well as the future substitution of one of its diesel engines by a gas turbine. The Auris operated commercially as a tanker for three-and-a-half years with a diesel–electric propulsion unit as originally commissioned, but in 1951 one of its four diesel engines – which were known as "Faith", "Hope", "Charity" and "Prudence" – was replaced by the world's first marine gas turbine engine, an open-cycle gas turbo-alternator built by the British Thomson-Houston Company in Rugby. Following successful sea trials off the Northumbrian coast, the Auris set sail from Hebburn-on-Tyne in October 1951 bound for Port Arthur in the US and then Curaçao in the southern Caribbean, returning to Avonmouth after 44 days at sea and successfully completing her historic trans-Atlantic crossing. During this time at sea the gas turbine burnt diesel fuel and operated without an involuntary stop or mechanical difficulty of any kind. She subsequently visited Swansea, Hull, Rotterdam, Oslo and Southampton, covering a total of 13,211 nautical miles. The Auris then had all of its power plants replaced with a directly coupled gas turbine to become the first civilian ship to operate solely on gas turbine power.
Despite the success of this early experimental voyage the gas turbine did not replace the diesel engine as the propulsion plant for large merchant ships. At constant cruising speeds the diesel engine simply had no peer in the vital area of fuel economy. The gas turbine did have more success in Royal Navy ships and the other naval fleets of the world where sudden and rapid changes of speed are required by warships in action.
The United States Maritime Commission was looking for options to update WWII Liberty ships, and heavy-duty gas turbines were among those selected. In 1956 the John Sergeant was lengthened and equipped with a General Electric HD gas turbine with exhaust-gas regeneration, reduction gearing and a variable-pitch propeller. It operated for 9,700 hours, using residual fuel (Bunker C) for 7,000 of them. Fuel efficiency was on a par with steam propulsion, and power output was higher than expected because the ambient temperature of the North Sea route was lower than the design temperature of the gas turbine. This gave the ship a speed capability of 18 knots, up from 11 knots with the original power plant and well in excess of the targeted 15 knots. The ship made its first transatlantic crossing at an average speed of 16.8 knots, in spite of some rough weather along the way. Suitable Bunker C fuel was available only at a limited number of ports because the quality of the fuel was critical. The fuel oil also had to be treated on board to reduce contaminants, a labor-intensive process that was not suitable for automation at the time. Ultimately, the variable-pitch propeller, which was of a new and untested design, ended the trial, as three consecutive annual inspections revealed stress-cracking. This did not reflect poorly on the marine-propulsion gas-turbine concept, though, and the trial was a success overall. Its success opened the way for more development by GE on the use of HD gas turbines for marine use with heavy fuels. The John Sergeant was scrapped in 1972 at Portsmouth, PA.
Boeing launched its first passenger-carrying waterjet-propelled hydrofoil, the Boeing 929, in April 1974. These ships were powered by two Allison 501-KF gas turbines.
Between 1971 and 1981, Seatrain Lines operated a scheduled container service between ports on the eastern seaboard of the United States and ports in northwest Europe across the North Atlantic with four container ships of 26,000 tonnes DWT. Those ships were powered by twin Pratt & Whitney gas turbines of the FT 4 series. The four ships in the class were named Euroliner, Eurofreighter, Asialiner and Asiafreighter. Following the dramatic Organization of the Petroleum Exporting Countries (OPEC) price increases of the mid-1970s, operations were constrained by rising fuel costs. Some modification of the engine systems on those ships was undertaken to permit the burning of a lower grade of fuel (i.e., marine diesel). Fuel costs were successfully reduced by using a different, untested fuel in a marine gas turbine, but maintenance costs increased with the fuel change. After 1981 the ships were sold and refitted with what were, at the time, more economical diesel engines, but the increased engine size reduced cargo space.
The first passenger ferry to use a gas turbine was the GTS Finnjet, built in 1977 and powered by two Pratt & Whitney FT 4C-1 DLF turbines, which propelled the ship at a speed of 31 knots. However, the Finnjet also illustrated the shortcomings of gas turbine propulsion in commercial craft, as high fuel prices made operating her unprofitable. After four years of service, additional diesel engines were installed on the ship to reduce running costs during the off-season, making the Finnjet also the first ship with combined diesel–electric and gas propulsion. Another example of commercial use of gas turbines in a passenger ship is Stena Line's HSS-class fastcraft ferries. The HSS 1500-class Stena Explorer, Stena Voyager and Stena Discovery vessels use combined gas and gas setups of twin GE LM2500 plus GE LM1600 turbines. The slightly smaller HSS 900-class Stena Carisma uses twin ABB–STAL GT35 turbines. The Stena Discovery was withdrawn from service in 2007, another victim of high fuel costs.
In July 2000, the Millennium became the first cruise ship to be powered by both gas and steam turbines. The ship featured two General Electric LM2500 gas turbine generators whose exhaust heat was used to operate a steam turbine generator in a COGES (combined gas electric and steam) configuration. Propulsion was provided by two electrically driven Rolls-Royce Mermaid azimuth pods. Another liner uses a combined diesel and gas configuration.
In marine racing applications, the 2010 C5000 Mystic catamaran Miss GEICO uses two Lycoming T-55 turbines for its power system.
Advances in technology
Gas turbine technology has steadily advanced since its inception and continues to evolve. Development is actively producing both smaller gas turbines and more powerful and efficient engines. Aiding in these advances are computer-based design (specifically computational fluid dynamics and finite element analysis) and the development of advanced materials: Base materials with superior high-temperature strength (e.g., single-crystal superalloys that exhibit yield strength anomaly) or thermal barrier coatings that protect the structural material from ever-higher temperatures. These advances allow higher compression ratios and turbine inlet temperatures, more efficient combustion and better cooling of engine parts.
Computational fluid dynamics (CFD) has contributed to substantial improvements in the performance and efficiency of gas turbine engine components through enhanced understanding of the complex viscous flow and heat transfer phenomena involved. For this reason, CFD is one of the key computational tools used in design and development of gas turbine engines.
The simple-cycle efficiencies of early gas turbines were practically doubled by incorporating intercooling, regeneration (or recuperation), and reheating. These improvements, of course, come at the expense of increased initial and operating costs, and they cannot be justified unless the decrease in fuel costs offsets the increase in other costs. Relatively low fuel prices, the general desire in the industry to minimize installation costs, and the tremendous increase in simple-cycle efficiency to about 40 percent left little incentive to opt for these modifications.
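The sensitivity of simple-cycle efficiency to the pressure ratio can be illustrated with the ideal air-standard Brayton relation. This is a textbook approximation only; the pressure ratios below are arbitrary example values, not figures for any particular engine, and real machines lose several points of efficiency to component losses.

```python
# Ideal air-standard Brayton (simple-cycle) efficiency as a function of
# pressure ratio: eta = 1 - r**(-(gamma - 1) / gamma).
# Illustrative only: real engines lose efficiency to component
# inefficiencies, pressure drops, and cooling-air extraction.

GAMMA = 1.4  # ratio of specific heats for air

def brayton_efficiency(pressure_ratio: float, gamma: float = GAMMA) -> float:
    """Thermal efficiency of the ideal simple-cycle gas turbine."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (5, 10, 15, 30):
    print(f"pressure ratio {r:2d}: ideal efficiency {brayton_efficiency(r):.1%}")
```

The formula makes the design trade-off visible: efficiency rises steeply at low pressure ratios and flattens at high ones, which is one reason recuperation and intercooling were attractive for early, low-pressure-ratio machines.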
On the emissions side, the challenge is to increase turbine inlet temperatures while at the same time reducing peak flame temperature in order to achieve lower NOx emissions and meet the latest emission regulations. In May 2011, Mitsubishi Heavy Industries achieved a record turbine inlet temperature on a 320 megawatt gas turbine, and 460 MW in gas turbine combined-cycle power generation applications in which gross thermal efficiency exceeds 60%.
Compliant foil bearings were commercially introduced to gas turbines in the 1990s. These can withstand over a hundred thousand start/stop cycles and have eliminated the need for an oil system. The application of microelectronics and power-switching technology has enabled the development of commercially viable electricity generation by microturbines for distribution and vehicle propulsion.
In 2013, General Electric started the development of the GE9X with a compression ratio of 61:1.
Advantages and disadvantages
The following are advantages and disadvantages of gas-turbine engines:
Advantages include:
Very high power-to-weight ratio compared to reciprocating engines.
Smaller than most reciprocating engines of the same power rating.
Smooth rotation of the main shaft produces far less vibration than a reciprocating engine.
Fewer moving parts than reciprocating engines, resulting in lower maintenance cost and higher reliability/availability over the service life.
Greater reliability, particularly in applications where sustained high power output is required.
Waste heat is dissipated almost entirely in the exhaust. This results in a high-temperature exhaust stream that is very usable for boiling water in a combined cycle, or for cogeneration (see the combined-cycle sketch after these lists).
Lower peak combustion pressures than reciprocating engines in general.
High shaft speeds in smaller "free turbine units", although larger gas turbines employed in power generation operate at synchronous speeds.
Low lubricating oil cost and consumption.
Can run on a wide variety of fuels.
Very low toxic emissions of CO and HC due to excess air, complete combustion and no "quench" of the flame on cold surfaces.
Disadvantages include:
Core engine costs can be high due to the use of exotic materials, especially in applications where high reliability is required (e.g. aircraft propulsion)
Less efficient than reciprocating engines at idle speed.
Longer startup than reciprocating engines.
Less responsive to changes in power demand compared with reciprocating engines.
Characteristic whine can be hard to suppress. The exhaust (particularly on turbojets) can also produce a distinctive roaring sound.
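To put a number on the combined-cycle advantage noted in the list above, a minimal sketch follows. The 40% gas turbine and 35% steam cycle efficiencies are assumed round figures, not data for any particular plant; the result is consistent with the greater-than-60% gross efficiency cited earlier for modern combined-cycle installations.

```python
# Combined-cycle efficiency when a steam turbine recovers work from the
# gas turbine exhaust. The component efficiencies here are assumed round
# numbers for illustration, not figures for any particular plant.

def combined_cycle_efficiency(eta_gt: float, eta_st: float) -> float:
    """The gas turbine converts eta_gt of the fuel heat to work; the steam
    cycle then converts eta_st of the remaining (1 - eta_gt) exhaust heat."""
    return eta_gt + eta_st * (1.0 - eta_gt)

print(f"{combined_cycle_efficiency(0.40, 0.35):.0%}")  # -> 61%
```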
Major manufacturers
Siemens Energy
Ansaldo
Mitsubishi Heavy Industries
Rolls-Royce
GE Aviation
Silmash
ODK
Pratt & Whitney
P&W Canada
Solar Turbines
Alstom
Zorya-Mashproekt
MTU Aero Engines
MAN Turbo
IHI Corporation
Kawasaki Heavy Industries
HAL
BHEL
MAPNA
Techwin
Doosan Heavy
Shanghai Electric
Harbin Electric
AECC
Testing
British, German, other national and international test codes are used to standardize the procedures and definitions used to test gas turbines. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the turbine and associated systems. In the United States, ASME has produced several performance test codes on gas turbines. This includes ASME PTC 22–2014. These ASME performance test codes have gained international recognition and acceptance for testing gas turbines. The single most important and differentiating characteristic of ASME performance test codes, including PTC 22, is that the test uncertainty of the measurement indicates the quality of the test and is not to be used as a commercial tolerance.
| Technology | Electricity generation and distribution | null |
58673 | https://en.wikipedia.org/wiki/Liquid%20hydrogen | Liquid hydrogen | Liquid hydrogen () is the liquid state of the element hydrogen. Hydrogen is found naturally in the molecular H2 form.
To exist as a liquid, H2 must be cooled below its critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to its atmospheric-pressure boiling point. A common method of obtaining liquid hydrogen involves a compressor resembling a jet engine in both appearance and principle. Liquid hydrogen is typically used as a concentrated form of hydrogen storage. Storing it as a liquid takes less space than storing it as a gas at normal temperature and pressure. However, the liquid density is very low compared to other common fuels. Once liquefied, it can be maintained as a liquid for some time in thermally insulated containers.
There are two spin isomers of hydrogen; whereas room temperature hydrogen is mostly orthohydrogen, liquid hydrogen consists of 99.79% parahydrogen and 0.21% orthohydrogen.
Liquefying hydrogen requires a theoretical minimum amount of energy, somewhat more when conversion of the hydrogen to the para isomer is included, but practical liquefaction generally consumes several times that minimum, a substantial fraction of hydrogen's heating value.
History
In 1885, Zygmunt Florenty Wróblewski published measured values for hydrogen's critical temperature, critical pressure, and boiling point.
Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. The first synthesis of the stable isomer form of liquid hydrogen, parahydrogen, was achieved by Paul Harteck and Karl Friedrich Bonhoeffer in 1929.
Spin isomers of hydrogen
The two nuclei in a dihydrogen molecule can have two different spin states.
Parahydrogen, in which the two nuclear spins are antiparallel, is more stable than orthohydrogen, in which the two are parallel. At room temperature, gaseous hydrogen is mostly in the ortho isomeric form due to thermal energy, but an ortho-enriched mixture is only metastable when liquified at low temperature. It slowly undergoes an exothermic reaction to become the para isomer, with enough energy released as heat to cause some of the liquid to boil. To prevent loss of the liquid during long-term storage, it is therefore intentionally converted to the para isomer as part of the production process, typically using a catalyst such as iron(III) oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromium(III) oxide, or some nickel compounds.
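The temperature dependence of the equilibrium ortho:para ratio follows from the rotational partition functions of the two spin species: para states occupy even rotational levels (nuclear spin degeneracy 1), ortho states odd levels (degeneracy 3). The rigid-rotor sketch below, using hydrogen's rotational temperature of roughly 87.6 K, is an approximation rather than a substitute for measured data, but it reproduces both the near-3:1 ortho:para ratio at room temperature and the almost pure parahydrogen equilibrium at liquid temperatures.

```python
import math

# Equilibrium parahydrogen fraction from rigid-rotor partition functions.
# Para states have even J (nuclear spin degeneracy 1); ortho states have
# odd J (degeneracy 3). THETA_ROT ~ 87.6 K is H2's rotational temperature;
# treating the levels as rigid-rotor values is an approximation.

THETA_ROT = 87.6  # K

def para_fraction(temperature: float, j_max: int = 40) -> float:
    z_para = sum((2 * j + 1) * math.exp(-j * (j + 1) * THETA_ROT / temperature)
                 for j in range(0, j_max, 2))          # even J
    z_ortho = sum(3 * (2 * j + 1) * math.exp(-j * (j + 1) * THETA_ROT / temperature)
                  for j in range(1, j_max, 2))         # odd J
    return z_para / (z_para + z_ortho)

print(f"300 K: {para_fraction(300):.1%} para")  # ~25% (3:1 ortho:para)
print(f" 20 K: {para_fraction(20):.1%} para")   # ~99.8% para
```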
Uses
Liquid hydrogen is a common liquid rocket fuel for rocketry applications and is used by NASA and the U.S. Air Force, which operate a large number of liquid hydrogen tanks with an individual capacity of up to 3.8 million liters (1 million U.S. gallons).
In most rocket engines fueled by liquid hydrogen, it first cools the nozzle and other parts before being mixed with the oxidizer, usually liquid oxygen, and burned to produce water with traces of ozone and hydrogen peroxide. Practical H2–O2 rocket engines run fuel-rich so that the exhaust contains some unburned hydrogen. This reduces combustion chamber and nozzle erosion. It also reduces the molecular weight of the exhaust, which can increase specific impulse, despite the incomplete combustion.
Liquid hydrogen can be used as the fuel for an internal combustion engine or fuel cell. Various submarines (including the Type 212 and Type 214) and concept hydrogen vehicles (such as the DeepC and BMW H2R) have been built using this form of hydrogen. Because of its similarity to liquefied natural gas (LNG), builders can sometimes modify and share equipment designed for LNG systems. Liquid hydrogen is being investigated as a zero-carbon fuel for aircraft. Because of its lower volumetric energy density, the hydrogen volumes needed for combustion are large. Unless direct injection is used, a severe gas-displacement effect also hampers maximum breathing and increases pumping losses.
Liquid hydrogen is also used to cool neutrons to be used in neutron scattering. Since neutrons and hydrogen nuclei have similar masses, kinetic energy exchange per interaction is maximum (elastic collision). Finally, superheated liquid hydrogen was used in many bubble chamber experiments.
The first thermonuclear bomb, Ivy Mike, used liquid deuterium, also known as hydrogen-2, for nuclear fusion.
Properties
The product of hydrogen combustion in a pure oxygen environment is solely water vapor. However, the high combustion temperatures in the presence of atmospheric nitrogen can break N≡N bonds, forming toxic NOx if the exhaust is not scrubbed. Since water vapor is often considered harmless to the environment, an engine burning hydrogen can be considered "zero-emission". In aviation, however, water vapor emitted into the atmosphere contributes to global warming (to a lesser extent than CO2). Liquid hydrogen also has a much higher specific energy than gasoline, natural gas, or diesel.
The density of liquid hydrogen is only 70.85 kg/m3 (at 20 K), a relative density of just 0.07. Although its specific energy is more than twice that of other fuels, its very low density gives it a remarkably low volumetric energy density, many times lower than theirs.
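The contrast is easy to quantify from the density just given. In the sketch below, the heating values and the gasoline density are assumed typical round figures, not measured data:

```python
# Volumetric energy density = density * specific energy.
# Hydrogen's liquid density is from the text; the heating values and the
# gasoline density are assumed typical round figures.

LH2_DENSITY = 70.85       # kg/m^3 at 20 K
LH2_LHV = 120.0           # MJ/kg, lower heating value of hydrogen (assumed)
GASOLINE_DENSITY = 745.0  # kg/m^3, assumed typical
GASOLINE_LHV = 44.0       # MJ/kg, assumed typical

lh2_vol = LH2_DENSITY * LH2_LHV / 1000.0           # MJ per litre
gasoline_vol = GASOLINE_DENSITY * GASOLINE_LHV / 1000.0

print(f"liquid hydrogen: {lh2_vol:.1f} MJ/L")      # ~8.5 MJ/L
print(f"gasoline:        {gasoline_vol:.1f} MJ/L") # ~32.8 MJ/L
```

Under these assumptions, liquid hydrogen carries more than twice the energy per kilogram but roughly a quarter of the energy per litre, which is why tank volume dominates liquid-hydrogen vehicle and aircraft designs.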
Liquid hydrogen requires cryogenic storage technology such as special thermally insulated containers and requires special handling common to all cryogenic fuels. This is similar to, but more severe than, the handling of liquid oxygen. Even with thermally insulated containers it is difficult to keep such a low temperature, and the hydrogen will gradually boil away (typically at a rate of 1% per day). It also shares many of the same safety issues as other forms of hydrogen, as well as being cold enough to liquefy, or even solidify, atmospheric oxygen, which can be an explosion hazard.
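A constant fractional boil-off rate compounds geometrically, so losses over long storage periods are larger than a linear estimate suggests. The sketch below uses the 1% per day figure quoted above; actual rates depend on tank size and insulation.

```python
# Boil-off at a constant fractional rate: the stored mass decays
# geometrically. The 1% per day figure is the typical rate from the text.

def remaining_fraction(days: float, daily_loss: float = 0.01) -> float:
    return (1.0 - daily_loss) ** days

for d in (7, 30, 90):
    print(f"after {d:3d} days: {remaining_fraction(d):.1%} remains")
# after   7 days: ~93.2% remains
# after  30 days: ~74.0% remains
# after  90 days: ~40.5% remains
```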
The triple point of hydrogen is at 13.81 K and 7.042 kPa.
Safety
Due to its cold temperature, liquid hydrogen is a hazard for cold burns. Hydrogen itself is biologically inert; its human health hazards as a vapor are the displacement of oxygen, resulting in asphyxiation, and its very high flammability and ability to detonate when mixed with air. Because of its flammability, liquid hydrogen should be kept away from heat or flame unless ignition is intended. Unlike ambient-temperature gaseous hydrogen, which is lighter than air, hydrogen recently vaporized from the liquid is so cold that it is denser than air and can form flammable heavier-than-air air–hydrogen mixtures.
| Physical sciences | s-Block | Chemistry |
58685 | https://en.wikipedia.org/wiki/Hypothalamus | Hypothalamus | The hypothalamus (pl.: hypothalami) is a small part of the vertebrate brain that contains a number of nuclei with a variety of functions. One of the most important functions is to link the nervous system to the endocrine system via the pituitary gland. The hypothalamus is located below the thalamus and is part of the limbic system. It forms the basal part of the diencephalon. All vertebrate brains contain a hypothalamus. In humans, it is about the size of an almond.
The hypothalamus has the function of regulating certain metabolic processes and other activities of the autonomic nervous system. It synthesizes and secretes certain neurohormones, called releasing hormones or hypothalamic hormones, and these in turn stimulate or inhibit the secretion of hormones from the pituitary gland. The hypothalamus controls body temperature, hunger, important aspects of parenting and maternal attachment behaviours, thirst, fatigue, sleep, circadian rhythms, and is important in certain social behaviors, such as sexual and aggressive behaviors.
Structure
The hypothalamus is divided into four regions (preoptic, supraoptic, tuberal, mammillary) in a parasagittal plane, indicating location anterior-posterior; and three zones (periventricular, intermediate, lateral) in the coronal plane, indicating location medial-lateral. Hypothalamic nuclei are located within these specific regions and zones. It is found in all vertebrate nervous systems. In mammals, magnocellular neurosecretory cells in the paraventricular nucleus and the supraoptic nucleus of the hypothalamus produce neurohypophysial hormones, oxytocin and vasopressin. These hormones are released into the blood in the posterior pituitary. Much smaller parvocellular neurosecretory cells, neurons of the paraventricular nucleus, release corticotropin-releasing hormone and other hormones into the hypophyseal portal system, where these hormones diffuse to the anterior pituitary.
Nuclei
The hypothalamic nuclei include the following:
Connections
The hypothalamus is highly interconnected with other parts of the central nervous system, in particular the brainstem and its reticular formation. As part of the limbic system, it has connections to other limbic structures including the amygdala and septum, and is also connected with areas of the autonomic nervous system.
The hypothalamus receives many inputs from the brainstem, the most notable from the nucleus of the solitary tract, the locus coeruleus, and the ventrolateral medulla.
Most nerve fibres within the hypothalamus run in two ways (bidirectional).
Projections to areas caudal to the hypothalamus go through the medial forebrain bundle, the mammillotegmental tract and the dorsal longitudinal fasciculus.
Projections to areas rostral to the hypothalamus are carried by the mammillothalamic tract, the fornix and terminal stria.
Projections to areas of the sympathetic motor system (lateral horn spinal segments T1–L2/L3) are carried by the hypothalamospinal tract and they activate the sympathetic motor pathway.
Sexual dimorphism
Several hypothalamic nuclei are sexually dimorphic; i.e., there are clear differences in both structure and function between males and females. Some differences are apparent even in gross neuroanatomy: most notable is the sexually dimorphic nucleus within the preoptic area. Other differences are subtle changes in the connectivity and chemical sensitivity of particular sets of neurons. The importance of these changes can be recognized by functional differences between males and females. For instance, males of most species prefer the odor and appearance of females over males, which is instrumental in stimulating male sexual behavior. If the sexually dimorphic nucleus is lesioned, this preference for females by males diminishes. Also, the pattern of secretion of growth hormone is sexually dimorphic; this is why, in many species, adult males differ visibly in size from females.
Responsiveness to ovarian steroids
Other striking functional dimorphisms are in the behavioral responses to ovarian steroids of the adult. Males and females respond to ovarian steroids in different ways, partly because the expression of estrogen-sensitive neurons in the hypothalamus is sexually dimorphic; i.e., estrogen receptors are expressed in different sets of neurons.
Estrogen and progesterone can influence gene expression in particular neurons or induce changes in cell membrane potential and kinase activation, leading to diverse non-genomic cellular functions. Estrogen and progesterone bind to their cognate nuclear hormone receptors, which translocate to the cell nucleus and interact with regions of DNA known as hormone response elements (HREs) or are tethered to another transcription factor's binding site. Estrogen receptor (ER) has been shown to transactivate other transcription factors in this manner, despite the absence of an estrogen response element (ERE) in the proximal promoter region of the gene. In general, ERs and progesterone receptors (PRs) are gene activators, with increased mRNA and subsequent protein synthesis following hormone exposure.
Male and female brains differ in the distribution of estrogen receptors, and this difference is an irreversible consequence of neonatal steroid exposure. Estrogen receptors (and progesterone receptors) are found mainly in neurons in the anterior and mediobasal hypothalamus, notably:
the preoptic area (where LHRH neurons are located), which regulates dopamine responses and maternal behavior;
the periventricular nucleus where somatostatin neurons are located, regulating stress levels;
the ventromedial hypothalamus which regulates hunger and sexual arousal.
Development
In neonatal life, gonadal steroids influence the development of the neuroendocrine hypothalamus. For instance, they determine the ability of females to exhibit a normal reproductive cycle, and of males and females to display appropriate reproductive behaviors in adult life.
If a female rat is injected once with testosterone in the first few days of postnatal life (during the "critical period" of sex-steroid influence), the hypothalamus is irreversibly masculinized; the adult rat will be incapable of generating an LH surge in response to estrogen (a characteristic of females), but will be capable of exhibiting male sexual behaviors (mounting a sexually receptive female).
By contrast, a male rat castrated just after birth will be feminized, and the adult will show female sexual behavior in response to estrogen (sexual receptivity, lordosis behavior).
In primates, the developmental influence of androgens is less clear, and the consequences are less understood. Within the brain, testosterone is aromatized (to estradiol), which is the principal active hormone for developmental influences. The human testis secretes high levels of testosterone from about week eight of fetal life until five to six months after birth (a similar perinatal surge in testosterone is observed in many species), a process that appears to underlie the male phenotype. Estrogen from the maternal circulation is relatively ineffective, partly because of the high circulating levels of steroid-binding proteins in pregnancy.
Sex steroids are not the only important influences upon hypothalamic development; in particular, pre-pubertal stress in early life (of rats) determines the capacity of the adult hypothalamus to respond to an acute stressor. Unlike gonadal steroid receptors, glucocorticoid receptors are very widespread throughout the brain; in the paraventricular nucleus, they mediate negative feedback control of CRF synthesis and secretion, but elsewhere their role is not well understood.
Function
Hormone release
The hypothalamus has a central neuroendocrine function, most notably by its control of the anterior pituitary, which in turn regulates various endocrine glands and organs. Releasing hormones (also called releasing factors) are produced in hypothalamic nuclei then transported along axons to either the median eminence or the posterior pituitary, where they are stored and released as needed.
Anterior pituitary
In the hypothalamic–adenohypophyseal axis, releasing hormones, also known as hypophysiotropic or hypothalamic hormones, are released from the median eminence, a prolongation of the hypothalamus, into the hypophyseal portal system, which carries them to the anterior pituitary where they exert their regulatory functions on the secretion of adenohypophyseal hormones. These hypophysiotropic hormones are produced by parvocellular neurosecretory cells located in the periventricular area of the hypothalamus. After their release into the capillaries of the third ventricle, the hypophysiotropic hormones travel through what is known as the hypothalamo-pituitary portal circulation. Once they reach their destination in the anterior pituitary, these hormones bind to specific receptors located on the surface of pituitary cells. Depending on which cells are activated through this binding, the pituitary will either begin secreting or stop secreting hormones into the rest of the bloodstream.
Other hormones secreted from the median eminence include vasopressin, oxytocin, and neurotensin.
Posterior pituitary
In the hypothalamic–neurohypophyseal axis, neurohypophysial hormones are released from the posterior pituitary, which is actually a prolongation of the hypothalamus, into the circulation.
It is also known that hypothalamic–pituitary–adrenal axis (HPA) hormones are related to certain skin diseases and skin homeostasis. There is evidence linking hyperactivity of HPA hormones to stress-related skin diseases and skin tumors.
Stimulation
The hypothalamus coordinates many hormonal and behavioural circadian rhythms, complex patterns of neuroendocrine outputs, complex homeostatic mechanisms, and important behaviours. The hypothalamus must, therefore, respond to many different signals, some of which are generated externally and some internally. Delta wave signalling arising either in the thalamus or in the cortex influences the secretion of releasing hormones; GHRH and prolactin are stimulated whilst TRH is inhibited.
The hypothalamus is responsive to:
Light: daylength and photoperiod for regulating circadian and seasonal rhythms
Olfactory stimuli, including pheromones
Steroids, including gonadal steroids and corticosteroids
Neurally transmitted information arising in particular from the heart, enteric nervous system (of the gastrointestinal tract), and the reproductive tract.
Autonomic inputs
Blood-borne stimuli, including leptin, ghrelin, angiotensin, insulin, pituitary hormones, cytokines, plasma concentrations of glucose and osmolarity etc.
Stress
Invading microorganisms by increasing body temperature, resetting the body's thermostat upward.
Olfactory stimuli
Olfactory stimuli are important for sexual reproduction and neuroendocrine function in many species. For instance, if a pregnant mouse is exposed to the urine of a 'strange' male during a critical period after coitus then the pregnancy fails (the Bruce effect). Thus, during coitus, a female mouse forms a precise 'olfactory memory' of her partner that persists for several days. Pheromonal cues aid synchronization of oestrus in many species; in women, synchronized menstruation may also arise from pheromonal cues, although the role of pheromones in humans is disputed.
Blood-borne stimuli
Peptide hormones have important influences upon the hypothalamus, and to do so they must pass through the blood–brain barrier. The hypothalamus is bounded in part by specialized brain regions that lack an effective blood–brain barrier; the capillary endothelium at these sites is fenestrated to allow free passage of even large proteins and other molecules. Some of these sites are sites of neurosecretion: the neurohypophysis and the median eminence. However, others are sites at which the brain samples the composition of the blood. Two of these sites, the SFO (subfornical organ) and the OVLT (organum vasculosum of the lamina terminalis), are so-called circumventricular organs, where neurons are in intimate contact with both blood and CSF. These structures are densely vascularized, and contain osmoreceptive and sodium-receptive neurons that control drinking, vasopressin release, sodium excretion, and sodium appetite. They also contain neurons with receptors for angiotensin, atrial natriuretic factor, endothelin and relaxin, each of which is important in the regulation of fluid and electrolyte balance. Neurons in the OVLT and SFO project to the supraoptic nucleus and paraventricular nucleus, and also to preoptic hypothalamic areas. The circumventricular organs may also be the site of action of interleukins to elicit both fever and ACTH secretion, via effects on paraventricular neurons.
It is not clear how all peptides that influence hypothalamic activity gain the necessary access. In the case of prolactin and leptin, there is evidence of active uptake at the choroid plexus from the blood into the cerebrospinal fluid (CSF). Some pituitary hormones have a negative feedback influence upon hypothalamic secretion; for example, growth hormone feeds back on the hypothalamus, but how it enters the brain is not clear. There is also evidence for central actions of prolactin.
Findings have suggested that thyroid hormone (T4) is taken up by the hypothalamic glial cells in the infundibular nucleus/median eminence, and that it is here converted into T3 by the type 2 deiodinase (D2). Subsequent to this, T3 is transported into the thyrotropin-releasing hormone (TRH)-producing neurons in the paraventricular nucleus. Thyroid hormone receptors have been found in these neurons, indicating that they are indeed sensitive to T3 stimuli. In addition, these neurons expressed MCT8, a thyroid hormone transporter, supporting the theory that T3 is transported into them. T3 could then bind to the thyroid hormone receptor in these neurons and affect the production of thyrotropin-releasing hormone, thereby regulating thyroid hormone production.
The hypothalamus functions as a type of thermostat for the body. It sets a desired body temperature, and stimulates either heat production and retention to raise the blood temperature to a higher setting or sweating and vasodilation to cool the blood to a lower temperature. All fevers result from a raised setting in the hypothalamus; elevated body temperatures due to any other cause are classified as hyperthermia. Rarely, direct damage to the hypothalamus, such as from a stroke, will cause a fever; this is sometimes called a hypothalamic fever. However, it is more common for such damage to cause abnormally low body temperatures.
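The set-point behaviour described above can be caricatured as a simple negative-feedback controller. The sketch below is a toy model for illustration only: the set point, thresholds, and effector labels are invented, and real thermoregulation involves many graded, interacting pathways.

```python
# Toy negative-feedback model of the hypothalamic "thermostat".
# The set point, thresholds, and effector labels are invented for
# illustration; this is not a physiological simulation.

SET_POINT = 37.0  # degrees C

def thermoregulatory_response(core_temp: float, set_point: float = SET_POINT) -> str:
    error = set_point - core_temp
    if error > 0.2:    # too cold: raise heat production and retention
        return "shivering / vasoconstriction"
    if error < -0.2:   # too warm: dump heat
        return "sweating / vasodilation"
    return "no corrective drive"

for t in (36.0, 37.0, 38.5):
    print(f"core {t:.1f} C -> {thermoregulatory_response(t)}")

# A fever corresponds to raising SET_POINT: the same controller then
# defends a higher temperature, whereas hyperthermia is a rise in core
# temperature with the set point unchanged.
```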
Steroids
The hypothalamus contains neurons that react strongly to steroids and glucocorticoids (the steroid hormones of the adrenal gland, released in response to ACTH). It also contains specialized glucose-sensitive neurons (in the arcuate nucleus and ventromedial hypothalamus), which are important for appetite. The preoptic area contains thermosensitive neurons; these are important for TRH secretion.
Neural
Oxytocin secretion in response to suckling or vagino-cervical stimulation is mediated by some of these pathways; vasopressin secretion in response to cardiovascular stimuli arising from chemoreceptors in the carotid body and aortic arch, and from low-pressure atrial volume receptors, is mediated by others. In the rat, stimulation of the vagina also causes prolactin secretion, and this results in pseudo-pregnancy following an infertile mating. In the rabbit, coitus elicits reflex ovulation. In the sheep, cervical stimulation in the presence of high levels of estrogen can induce maternal behavior in a virgin ewe. These effects are all mediated by the hypothalamus, and the information is carried mainly by spinal pathways that relay in the brainstem. Stimulation of the nipples stimulates release of oxytocin and prolactin and suppresses the release of LH and FSH.
Cardiovascular stimuli are carried by the vagus nerve. The vagus also conveys a variety of visceral information, including for instance signals arising from gastric distension or emptying, to suppress or promote feeding, by signalling the release of leptin or gastrin, respectively. Again, this information reaches the hypothalamus via relays in the brainstem.
In addition, hypothalamic function is responsive to—and regulated by—levels of all three classical monoamine neurotransmitters, noradrenaline, dopamine, and serotonin (5-hydroxytryptamine), in those tracts from which it receives innervation. For example, noradrenergic inputs arising from the locus coeruleus have important regulatory effects upon corticotropin-releasing hormone (CRH) levels.
Control of food intake
The extreme lateral part of the ventromedial nucleus of the hypothalamus is responsible for the control of food intake. Stimulation of this area causes increased food intake. Bilateral lesion of this area causes complete cessation of food intake. Medial parts of the nucleus have a controlling effect on the lateral part. Bilateral lesion of the medial part of the ventromedial nucleus causes hyperphagia and obesity of the animal. Further lesion of the lateral part of the ventromedial nucleus in the same animal produces complete cessation of food intake.
There are different hypotheses related to this regulation:
Lipostatic hypothesis: This hypothesis holds that adipose tissue produces a humoral signal that is proportionate to the amount of fat and acts on the hypothalamus to decrease food intake and increase energy output. It is now evident that the hormone leptin acts on the hypothalamus in this way, decreasing food intake and increasing energy output.
Gut peptide hypothesis: gastrointestinal hormones like GRP, glucagon, CCK and others are claimed to inhibit food intake. Food entering the gastrointestinal tract triggers the release of these hormones, which act on the brain to produce satiety. The brain contains both CCK-A and CCK-B receptors.
Glucostatic hypothesis: The activity of the satiety center in the ventromedial nuclei is probably governed by glucose utilization in its neurons. It has been postulated that when their glucose utilization is low, and consequently when the arteriovenous blood glucose difference across them is low, the activity of these neurons decreases. Under these conditions, the activity of the feeding center is unchecked and the individual feels hungry. Consistent with this, food intake is rapidly increased by intraventricular administration of 2-deoxyglucose, which decreases glucose utilization in cells.
Thermostatic hypothesis: According to this hypothesis, a decrease in body temperature below a given set-point stimulates appetite, whereas an increase above the set-point inhibits appetite.
Fear processing
The medial zone of hypothalamus is part of a circuitry that controls motivated behaviors, like defensive behaviors. Analyses of Fos-labeling showed that a series of nuclei in the "behavioral control column" is important in regulating the expression of innate and conditioned defensive behaviors.
Antipredatory defensive behavior
Exposure to a predator (such as a cat) elicits defensive behaviors in laboratory rodents, even when the animal has never been exposed to a cat. In the hypothalamus, this exposure causes an increase in Fos-labeled cells in the anterior hypothalamic nucleus, the dorsomedial part of the ventromedial nucleus, and in the ventrolateral part of the premammillary nucleus (PMDvl). The premammillary nucleus has an important role in expression of defensive behaviors towards a predator, since lesions in this nucleus abolish defensive behaviors, like freezing and flight. The PMD does not modulate defensive behavior in other situations, as lesions of this nucleus had minimal effects on post-shock freezing scores. The PMD has important connections to the dorsal periaqueductal gray, an important structure in fear expression. In addition, animals display risk assessment behaviors to the environment previously associated with the cat. Fos-labeled cell analysis showed that the PMDvl is the most activated structure in the hypothalamus, and inactivation with muscimol prior to exposure to the context abolishes the defensive behavior. Therefore, the hypothalamus, mainly the PMDvl, has an important role in expression of innate and conditioned defensive behaviors to a predator.
Social defeat
Likewise, the hypothalamus has a role in social defeat: nuclei in the medial zone are also mobilized during an encounter with an aggressive conspecific. The defeated animal has an increase in Fos levels in sexually dimorphic structures, such as the medial preoptic nucleus, the ventrolateral part of the ventromedial nucleus, and the ventral premammillary nucleus. Such structures are important in other social behaviors, such as sexual and aggressive behaviors. Moreover, the premammillary nucleus is also mobilized, in its dorsomedial part but not the ventrolateral part. Lesions in this nucleus abolish passive defensive behavior, like freezing and the "on-the-back" posture.
Learning arbitrator
Recent research has questioned whether the lateral hypothalamus's role is restricted to initiating and stopping innate behaviors, and has argued that it also learns about food-related cues; specifically, that it opposes learning about information that is neutral or unrelated to food. According to this view, the lateral hypothalamus is "a unique arbitrator of learning capable of shifting behavior toward or away from important events".
Additional images
| Biology and health sciences | Endocrine system | Biology |
58686 | https://en.wikipedia.org/wiki/Cerebral%20cortex | Cerebral cortex | The cerebral cortex, also known as the cerebral mantle, is the outer layer of neural tissue of the cerebrum of the brain in humans and other mammals. It is the largest site of neural integration in the central nervous system, and plays a key role in attention, perception, awareness, thought, memory, language, and consciousness. The cerebral cortex is the part of the brain responsible for cognition.
The six-layered neocortex makes up approximately 90% of the cortex, with the allocortex making up the remainder. The cortex is divided into left and right parts by the longitudinal fissure, which separates the two cerebral hemispheres that are joined beneath the cortex by the corpus callosum. In most mammals, apart from small mammals that have small brains, the cerebral cortex is folded, providing a greater surface area in the confined volume of the cranium. Apart from minimising brain and cranial volume, cortical folding is crucial for the brain circuitry and its functional organisation. In mammals with small brains, there is no folding and the cortex is smooth.
A fold or ridge in the cortex is termed a gyrus (plural gyri) and a groove is termed a sulcus (plural sulci). These surface convolutions appear during fetal development and continue to mature after birth through the process of gyrification. In the human brain, the majority of the cerebral cortex is not visible from the outside, but buried in the sulci. The major sulci and gyri mark the divisions of the cerebrum into the lobes of the brain. The four major lobes are the frontal, parietal, occipital and temporal lobes. Other lobes are the limbic lobe, and the insular cortex often referred to as the insular lobe.
There are between 14 and 16 billion neurons in the human cerebral cortex. These are organised into horizontal cortical layers, and radially into cortical columns and minicolumns. Cortical areas have specific functions such as movement in the motor cortex, and sight in the visual cortex. The motor cortex is primarily located in the precentral gyrus, and the visual cortex is located in the occipital lobe.
Structure
The cerebral cortex is the outer covering of the surfaces of the cerebral hemispheres and is folded into peaks called gyri, and grooves called sulci. In the human brain, it is between 2 and 4 mm thick and makes up 40% of the brain's mass. 90% of the cerebral cortex is the six-layered neocortex, whilst the other 10% is made up of the three- or four-layered allocortex. There are between 14 and 16 billion neurons in the cortex. These cortical neurons are organized radially in cortical columns and minicolumns, within the horizontally organized layers of the cortex.
The neocortex is separable into different regions of cortex known in the plural as cortices, and include the motor cortex and visual cortex. About two thirds of the cortical surface is buried in the sulci and the insular cortex is completely hidden. The cortex is thickest over the top of a gyrus and thinnest at the bottom of a sulcus.
Folds
The cerebral cortex is folded in a way that allows a large surface area of neural tissue to fit within the confines of the neurocranium. When unfolded in the human, each hemispheric cortex has a large total surface area. The folding is inward away from the surface of the brain, and is also present on the medial surface of each hemisphere within the longitudinal fissure. Most mammals have a cerebral cortex that is convoluted, with the peaks known as gyri and the troughs or grooves known as sulci. Some small mammals, including some small rodents, have smooth cerebral surfaces without gyrification.
Lobes
The larger sulci and gyri mark the divisions of the cortex of the cerebrum into the lobes of the brain. There are four main lobes: the frontal lobe, parietal lobe, temporal lobe, and occipital lobe. The insular cortex is often included as the insular lobe. The limbic lobe is a rim of cortex on the medial side of each hemisphere and is also often included. There are also three lobules of the brain described: the paracentral lobule, the superior parietal lobule, and the inferior parietal lobule.
Thickness
For species of mammals, larger brains (in absolute terms, not just in relation to body size) tend to have thicker cortices. The smallest mammals, such as shrews, have a neocortical thickness of about 0.5 mm; the ones with the largest brains, such as humans and fin whales, have thicknesses of 2–4 mm. There is an approximately logarithmic relationship between brain weight and cortical thickness.
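That logarithmic relationship can be illustrated with a two-point log-linear fit. The brain masses in the sketch below are assumed ballpark figures chosen only to anchor the line, not measurements from any cited study:

```python
import math

# Illustrative log-linear fit of cortical thickness to brain mass,
# anchored on two assumed rough data points (shrew and human). The
# masses are ballpark figures, not values from the cited literature.

m1, t1 = 0.2, 0.5      # shrew: ~0.2 g brain, ~0.5 mm cortex (assumed)
m2, t2 = 1400.0, 3.0   # human: ~1400 g brain, ~3 mm cortex (assumed)

slope = (t2 - t1) / (math.log10(m2) - math.log10(m1))  # mm per decade of mass
intercept = t1 - slope * math.log10(m1)

def predicted_thickness(brain_mass_g: float) -> float:
    return intercept + slope * math.log10(brain_mass_g)

print(f"{slope:.2f} mm per tenfold increase in brain mass")
print(f"predicted for a 7000 g brain: {predicted_thickness(7000):.1f} mm")
```

Under these assumptions the fit predicts roughly 3.5 mm for a very large brain, consistent with the 2–4 mm range quoted above for the largest-brained species.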
Magnetic resonance imaging of the brain (MRI) makes it possible to get a measure for the thickness of the human cerebral cortex and relate it to other measures. The thickness of different cortical areas varies but in general, sensory cortex is thinner than motor cortex. One study has found some positive association between the cortical thickness and intelligence.
Another study has found that the somatosensory cortex is thicker in migraine patients, though it is not known if this is the result of migraine attacks, the cause of them or if both are the result of a shared cause.
A later study using a larger patient population reports no change in the cortical thickness in patients with migraine.
A genetic disorder of the cerebral cortex, whereby decreased folding in certain areas results in a microgyrus, where there are four layers instead of six, is in some instances seen to be related to dyslexia.
Layers of neocortex
The neocortex is formed of six layers, numbered I to VI, from the outermost layer I – near to the pia mater, to the innermost layer VI – near to the underlying white matter. Each cortical layer has a characteristic distribution of different neurons and their connections with other cortical and subcortical regions. There are direct connections between different cortical areas and indirect connections via the thalamus.
One of the clearest examples of cortical layering is the line of Gennari in the primary visual cortex. This is a band of whiter tissue that can be observed with the naked eye in the calcarine sulcus of the occipital lobe. The line of Gennari is composed of axons bringing visual information from the thalamus into layer IV of the visual cortex.
Staining cross-sections of the cortex to reveal the position of neuronal cell bodies and the intracortical axon tracts allowed neuroanatomists in the early 20th century to produce a detailed description of the laminar structure of the cortex in different species. The work of Korbinian Brodmann (1909) established that the mammalian neocortex is consistently divided into six layers.
Layer I
Layer I is the molecular layer, and contains few scattered neurons, including GABAergic rosehip neurons. Layer I consists largely of extensions of apical dendritic tufts of pyramidal neurons and horizontally oriented axons, as well as glial cells. During development, Cajal–Retzius cells and subpial granular layer cells are present in this layer. Also, some spiny stellate cells can be found here. Inputs to the apical tufts are thought to be crucial for the feedback interactions in the cerebral cortex involved in associative learning and attention.
While it was once thought that the input to layer I came from the cortex itself, it is now known that layer I across the cerebral cortex receives substantial input from matrix or M-type thalamus cells, as opposed to core or C-type that go to layer IV.
It is thought that layer I serves as a central hub for collecting and processing widespread information. It integrates ascending sensory inputs with top-down expectations, regulating how sensory perceptions align with anticipated outcomes. Further, layer I sorts, directs, and combines excitatory inputs, integrating them with neuromodulatory signals. Inhibitory interneurons, both within layer I and from other cortical layers, gate these signals. Together, these interactions dynamically calibrate information flow throughout the neocortex, shaping perceptions and experiences.
Layer II
Layer II, the external granular layer, contains small pyramidal neurons and numerous stellate neurons.
Layer III
Layer III, the external pyramidal layer, contains predominantly small and medium-size pyramidal neurons, as well as non-pyramidal neurons with vertically oriented intracortical axons; layers I through III are the main target of commissural corticocortical afferents, and layer III is the principal source of corticocortical efferents.
Layer IV
Layer IV, the internal granular layer, contains different types of stellate and pyramidal cells, and is the main target of thalamocortical afferents from thalamus type C neurons (core-type) as well as intra-hemispheric corticocortical afferents. The layers above layer IV are also referred to as supragranular layers (layers I–III), whereas the layers below are referred to as infragranular layers (layers V and VI). African elephants, cetaceans, and hippopotamuses do not have a layer IV; axons that would have terminated there go instead to the inner part of layer III.
Layer V
Layer V, the internal pyramidal layer, contains large pyramidal neurons. Axons from these leave the cortex and connect with subcortical structures including the basal ganglia. In the primary motor cortex of the frontal lobe, layer V contains giant pyramidal cells called Betz cells, whose axons travel through the internal capsule, the brain stem, and the spinal cord forming the corticospinal tract, which is the main pathway for voluntary motor control.
Layer VI
Layer VI, the polymorphic layer or multiform layer, contains few large pyramidal neurons and many small spindle-like pyramidal and multiform neurons; layer VI sends efferent fibers to the thalamus, establishing a very precise reciprocal interconnection between the cortex and the thalamus. That is, layer VI neurons from one cortical column connect with thalamus neurons that provide input to the same cortical column. These connections are both excitatory and inhibitory. Neurons send excitatory fibers to neurons in the thalamus and also send collaterals to the thalamic reticular nucleus that inhibit these same thalamus neurons or ones adjacent to them. One theory is that because the inhibitory output is reduced by cholinergic input to the cerebral cortex, this provides the brainstem with adjustable "gain control for the relay of lemniscal inputs".
Columns
The cortical layers are not simply stacked one over the other; there exist characteristic connections between different layers and neuronal types, which span all the thickness of the cortex. These cortical microcircuits are grouped into cortical columns and minicolumns. It has been proposed that the minicolumns are the basic functional units of the cortex. In 1957, Vernon Mountcastle showed that the functional properties of the cortex change abruptly between laterally adjacent points; however, they are continuous in the direction perpendicular to the surface. Later works have provided evidence of the presence of functionally distinct cortical columns in the visual cortex (Hubel and Wiesel, 1959), auditory cortex, and associative cortex.
Cortical areas that lack a layer IV are called agranular. Cortical areas that have only a rudimentary layer IV are called dysgranular. Information processing within each layer is determined by different temporal dynamics with that in layers II/III having a slow 2 Hz oscillation while that in layer V has a fast 10–15 Hz oscillation.
Types of cortex
Based on the differences in laminar organization the cerebral cortex can be classified into two types, the large area of neocortex which has six cell layers, and the much smaller area of allocortex that has three or four layers:
The neocortex is also known as the isocortex or neopallium and is the part of the mature cerebral cortex with six distinct layers. Examples of neocortical areas include the granular primary motor cortex, and the striate primary visual cortex. The neocortex has two subtypes, the true isocortex and the proisocortex which is a transitional region between the isocortex and the regions of the periallocortex.
The allocortex is the part of the cerebral cortex with three or four layers, and has three subtypes, the paleocortex with three cortical laminae, the archicortex which has four or five, and a transitional area adjacent to the allocortex, the periallocortex. Examples of allocortex are the olfactory cortex and the hippocampus.
There is a transitional area between the neocortex and the allocortex called the paralimbic cortex, where layers 2, 3 and 4 are merged. This area incorporates the proisocortex of the neocortex and the periallocortex of the allocortex. In addition, the cerebral cortex may be classified into four lobes: the frontal lobe, temporal lobe, the parietal lobe, and the occipital lobe, named from their overlying bones of the skull.
Blood supply and drainage
Blood supply to the cerebral cortex is part of the cerebral circulation. Cerebral arteries supply the blood that perfuses the cerebrum. This arterial blood carries oxygen, glucose, and other nutrients to the cortex. Cerebral veins drain the deoxygenated blood, and metabolic wastes including carbon dioxide, back to the heart.
The main arteries supplying the cortex are the anterior cerebral artery, the middle cerebral artery, and the posterior cerebral artery. The anterior cerebral artery supplies the anterior portions of the brain, including most of the frontal lobe. The middle cerebral artery supplies the parietal lobes, temporal lobes, and parts of the occipital lobes. Each hemisphere is supplied by its own middle cerebral artery, which divides into branches that spread across the cortical surface. The posterior cerebral artery supplies the occipital lobes.
The circle of Willis is the main blood system that deals with blood supply in the cerebrum and cerebral cortex.
Development
The prenatal development of the cerebral cortex is a complex and finely tuned process called corticogenesis, influenced by the interplay between genes and the environment.
Neural tube
The cerebral cortex develops from the most anterior part, the forebrain region, of the neural tube. The neural plate folds and closes to form the neural tube. From the cavity inside the neural tube develops the ventricular system, and, from the neuroepithelial cells of its walls, the neurons and glia of the nervous system. The most anterior (front, or cranial) part of the neural plate, the prosencephalon, which is evident before neurulation begins, gives rise to the cerebral hemispheres and later cortex.
Cortical neuron development
Cortical neurons are generated within the ventricular zone, next to the ventricles. At first, this zone contains neural stem cells that transition into radial glial cells – progenitor cells that divide to produce glial cells and neurons.
Radial glia
The cerebral cortex is composed of a heterogeneous population of cells that give rise to different cell types. The majority of these cells are derived from radial glia, whose migration forms the different cell types of the neocortex during a period of increased neurogenesis. Similarly, the process of neurogenesis regulates lamination to form the different layers of the cortex. During this process there is an increasing restriction of cell fate: earlier progenitors can give rise to any cell type in the cortex, whereas later progenitors give rise only to neurons of the superficial layers. This differential cell fate creates an inside-out topography in the cortex, with younger neurons in superficial layers and older neurons in deeper layers. In addition, laminar neurons are stopped in the S or G2 phase, giving a fine distinction between the different cortical layers. Laminar differentiation is not fully complete until after birth, since during development laminar neurons are still sensitive to extrinsic signals and environmental cues.
Although the majority of the cells that compose the cortex are derived locally from radial glia, a subset population of neurons migrates in from other regions. Radial glia give rise to neurons that are pyramidal in shape and use glutamate as a neurotransmitter; the migrating cells, by contrast, contribute neurons that are stellate-shaped and use GABA as their main neurotransmitter. These GABAergic neurons are generated by progenitor cells in the medial ganglionic eminence (MGE) that migrate tangentially to the cortex via the subventricular zone. This migration of GABAergic neurons is particularly important since GABA receptors are excitatory during development. This excitation is primarily driven by the flux of chloride ions through the GABA receptor; in adults, chloride concentrations shift, causing an inward flux of chloride that hyperpolarizes postsynaptic neurons.
The glial fibers produced in the first divisions of the progenitor cells are radially oriented, spanning the thickness of the cortex from the ventricular zone to the outer, pial surface, and provide scaffolding for the migration of neurons outwards from the ventricular zone.
At birth there are very few dendrites present on the cortical neuron's cell body, and the axon is undeveloped. During the first year of life the dendrites increase dramatically in number, such that they can accommodate up to a hundred thousand synaptic connections with other neurons. The axon can develop to extend a long way from the cell body.
Asymmetric division
The first divisions of the progenitor cells are symmetric, which doubles the total number of progenitor cells at each mitotic cycle. Then, some progenitor cells begin to divide asymmetrically, producing one postmitotic cell that migrates along the radial glial fibers, leaving the ventricular zone, and one progenitor cell, which continues to divide until the end of development, when it differentiates into a glial cell or an ependymal cell. As the G1 phase of the cell cycle is elongated, in what is seen as selective cell-cycle lengthening, the newly born neurons migrate to more superficial layers of the cortex. The migrating daughter cells become the pyramidal cells of the cerebral cortex. The development process is time ordered and regulated by hundreds of genes and epigenetic regulatory mechanisms.
Layer organization
The layered structure of the mature cerebral cortex is formed during development. The first pyramidal neurons generated migrate out of the ventricular zone and subventricular zone, together with reelin-producing Cajal–Retzius neurons, to form the preplate. Next, a cohort of neurons migrating into the middle of the preplate divides this transient layer into the superficial marginal zone, which will become layer I of the mature neocortex, and the deeper subplate; the migrating cohort itself forms a middle layer called the cortical plate. These cells will form the deep layers of the mature cortex, layers five and six. Later born neurons migrate radially into the cortical plate past the deep layer neurons, and become the upper layers (two to four). Thus, the layers of the cortex are created in an inside-out order. The only exception to this inside-out sequence of neurogenesis occurs in layer I of primates, in which, in contrast to rodents, neurogenesis continues throughout the entire period of corticogenesis.
Cortical patterning
The map of functional cortical areas, which include primary motor and visual cortex, originates from a 'protomap', which is regulated by molecular signals such as fibroblast growth factor FGF8 early in embryonic development. These signals regulate the size, shape, and position of cortical areas on the surface of the cortical primordium, in part by regulating gradients of transcription factor expression, through a process called cortical patterning. Examples of such transcription factors include the genes EMX2 and PAX6. Together, the two transcription factors form opposing gradients of expression: Pax6 is highly expressed at the rostral lateral pole, while Emx2 is highly expressed at the caudomedial pole. The establishment of this gradient is important for proper development. For example, mutations in Pax6 can cause the expression domain of Emx2 to expand beyond its normal limits, which ultimately leads to an expansion of the areas normally derived from the caudal medial cortex, such as the visual cortex. Conversely, mutations in Emx2 can cause the Pax6-expressing domain to expand, resulting in enlargement of the frontal and motor cortical regions. Therefore, researchers believe that similar gradients and signaling centers next to the cortex could contribute to the regional expression of these transcription factors.
Two well-studied patterning signals for the cortex are FGF and retinoic acid. If FGFs are misexpressed in different areas of the developing cortex, cortical patterning is disrupted. Specifically, when Fgf8 is increased at the anterior pole, Emx2 is downregulated and a caudal shift in the cortical regions occurs, ultimately causing an expansion of the rostral regions. Therefore, Fgf8 and other FGFs play a role in the regulation of expression of Emx2 and Pax6 and represent how the cerebral cortex can become specialized for different functions.
Rapid expansion of the cortical surface area is regulated by the amount of self-renewal of radial glial cells and is partly regulated by FGF and Notch genes. During the period of cortical neurogenesis and layer formation, many higher mammals begin the process of gyrification, which generates the characteristic folds of the cerebral cortex. Gyrification is regulated by a DNA-associated protein Trnp1 and by FGF and SHH signaling.
Evolution
Of all the different brain regions, the cerebral cortex shows the largest evolutionary variation and has evolved most recently. In contrast to the highly conserved circuitry of the medulla oblongata, for example, which serves critical functions such as regulation of heart and respiration rates, many areas of the cerebral cortex are not strictly necessary for survival. Thus, the evolution of the cerebral cortex has seen the advent and modification of new functional areas—particularly association areas that do not directly receive input from outside the cortex.
A key theory of cortical evolution is embodied in the radial unit hypothesis and related protomap hypothesis, first proposed by Rakic. This theory states that new cortical areas are formed by the addition of new radial units, which is accomplished at the stem cell level. The protomap hypothesis states that the cellular and molecular identity and characteristics of neurons in each cortical area are specified by cortical stem cells, known as radial glial cells, in a primordial map. This map is controlled by secreted signaling proteins and downstream transcription factors.
Function
Connections
The cerebral cortex is connected to various subcortical structures such as the thalamus and the basal ganglia, sending information to them along efferent connections and receiving information from them via afferent connections. Most sensory information is routed to the cerebral cortex via the thalamus. Olfactory information, however, passes through the olfactory bulb to the olfactory cortex (piriform cortex). The majority of connections are from one area of the cortex to another, rather than from subcortical areas; Braitenberg and Schüz (1998) claim that in primary sensory areas, at the cortical level where the input fibers terminate, up to 20% of the synapses are supplied by extracortical afferents but that in other areas and other layers the percentage is likely to be much lower.
Cortical areas
The whole of the cerebral cortex was divided into 52 different areas in an early presentation by Korbinian Brodmann. These areas, known as Brodmann areas, are based on their cytoarchitecture but also relate to various functions. An example is Brodmann area 17, which is the primary visual cortex.
In more general terms the cortex is typically described as comprising three parts: sensory, motor, and association areas.
Sensory areas
The sensory areas are the cortical areas that receive and process information from the senses. Parts of the cortex that receive sensory inputs from the thalamus are called primary sensory areas. The senses of vision, hearing, and touch are served by the primary visual cortex, primary auditory cortex and primary somatosensory cortex respectively. In general, the two hemispheres receive information from the opposite (contralateral) side of the body. For example, the right primary somatosensory cortex receives information from the left limbs, and the right visual cortex receives information from the left visual field.
The organization of sensory maps in the cortex reflects that of the corresponding sensing organ, in what is known as a topographic map. Neighboring points in the primary visual cortex, for example, correspond to neighboring points in the retina. This topographic map is called a retinotopic map. In the same way, there exists a tonotopic map in the primary auditory cortex and a somatotopic map in the primary somatosensory cortex. This last topographic map of the body onto the postcentral gyrus has been illustrated as a deformed human representation, the somatosensory homunculus, where the size of different body parts reflects the relative density of their innervation. Areas with much sensory innervation, such as the fingertips and the lips, require more cortical area to process finer sensation.
Motor areas
The motor areas are located in both hemispheres of the cortex. The motor areas are very closely related to the control of voluntary movements, especially the fine, fractionated movements performed by the hand. The right half of the motor area controls the left side of the body, and vice versa.
Two areas of the cortex are commonly referred to as motor:
Primary motor cortex, which executes voluntary movements
Supplementary motor areas and premotor cortex, which select voluntary movements.
In addition, motor functions have been described for:
Posterior parietal cortex, which guides voluntary movements in space
Dorsolateral prefrontal cortex, which decides which voluntary movements to make according to higher-order instructions, rules, and self-generated thoughts.
Just underneath the cerebral cortex are interconnected subcortical masses of grey matter called basal ganglia (or nuclei). The basal ganglia receive input from the substantia nigra of the midbrain and motor areas of the cerebral cortex, and send signals back to both of these locations. They are involved in motor control. They are found lateral to the thalamus. The main components of the basal ganglia are the caudate nucleus, the putamen, the globus pallidus, the substantia nigra, the nucleus accumbens, and the subthalamic nucleus. The putamen and globus pallidus are also collectively known as the lentiform nucleus, because together they form a lens-shaped body. The putamen and caudate nucleus are also collectively called the corpus striatum after their striped appearance.
Association areas
The association areas are the parts of the cerebral cortex that do not belong to the primary regions. They function to produce a meaningful perceptual experience of the world, enable us to interact effectively, and support abstract thinking and language. The parietal, temporal, and occipital lobes – all located in the posterior part of the cortex – integrate sensory information and information stored in memory. The frontal lobe or prefrontal association complex is involved in planning actions and movement, as well as abstract thought. Globally, the association areas are organized as distributed networks. Each network connects areas distributed across widely spaced regions of the cortex. Distinct networks are positioned adjacent to one another yielding a complex series of interwoven networks. The specific organization of the association networks is debated with evidence for interactions, hierarchical relationships, and competition between networks.
In humans, association networks are particularly important to language function. In the past it was theorized that language abilities are localized in Broca's area in areas of the left inferior frontal gyrus, BA44 and BA45, for language expression and in Wernicke's area BA22, for language reception. However, the processes of language expression and reception have been shown to occur in areas other than just those structures around the lateral sulcus, including the frontal lobe, basal ganglia, cerebellum, and pons.
Clinical significance
Neurodegenerative diseases such as Alzheimer's disease show, as a marker, atrophy of the grey matter of the cerebral cortex.
Other diseases of the central nervous system include neurological disorders such as epilepsy, movement disorders, and different types of aphasia (difficulties in speech expression or comprehension).
Brain damage from disease or trauma can involve damage to a specific lobe, as in frontal lobe disorder, and the associated functions will be affected. The blood–brain barrier that serves to protect the brain from infection can become compromised, allowing entry to pathogens.
The developing fetus is susceptible to a range of environmental factors that can cause birth defects and problems in later development. Maternal alcohol consumption, for example, can cause fetal alcohol spectrum disorder. Other factors that can cause neurodevelopmental disorders are toxicants, such as drugs, and exposure to radiation, as from X-rays. Infections can also affect the development of the cortex. A viral infection is one of the causes of lissencephaly, which results in a smooth cortex without gyrification.
A type of electrocorticography called cortical stimulation mapping is an invasive procedure that involves placing electrodes directly onto the exposed brain in order to localise the functions of specific areas of the cortex. It is used in clinical and therapeutic applications including pre-surgical mapping.
Genes associated with cortical disorders
There are a number of genetic mutations that can cause a wide range of genetic disorders of the cerebral cortex, including microcephaly, schizencephaly and types of lissencephaly. Chromosomal abnormalities can also cause a number of neurodevelopmental disorders, such as fragile X syndrome and Rett syndrome.
MCPH1 codes for microcephalin, and disorders in this and in ASPM are associated with microcephaly. Mutations in the gene NBS1 that codes for nibrin can cause Nijmegen breakage syndrome, characterised by microcephaly.
Mutations in EMX2 and COL4A1 are associated with schizencephaly, a condition marked by the absence of large parts of the cerebral hemispheres.
History
In 1909, Korbinian Brodmann distinguished 52 different regions of the cerebral cortex based on their cytoarchitecture. These are known as Brodmann areas.
Rafael Lorente de Nó, a student of Santiago Ramón y Cajal, identified more than 40 different types of cortical neurons based on the distribution of their dendrites and axons.
Other animals
The cerebral cortex is derived from the pallium, a layered structure found in the forebrain of all vertebrates. The basic form of the pallium is a cylindrical layer enclosing fluid-filled ventricles. Around the circumference of the cylinder are four zones, the dorsal pallium, medial pallium, ventral pallium, and lateral pallium, which are thought to be homologous to the neocortex, hippocampus, amygdala, and olfactory cortex, respectively.
In avian brains, evidence suggests that the avian pallium's neuroarchitecture is reminiscent of the mammalian cerebral cortex. The avian pallium has also been suggested to be an equivalent neural basis for consciousness.
Until recently no counterpart to the cerebral cortex had been recognized in invertebrates. However, a study published in the journal Cell in 2010, based on gene expression profiles, reported strong affinities between the cerebral cortex and the mushroom bodies of the ragworm Platynereis dumerilii. Mushroom bodies are structures in the brains of many types of worms and arthropods that are known to play important roles in learning and memory; the genetic evidence indicates a common evolutionary origin, and therefore indicates that the origins of the earliest precursors of the cerebral cortex date back to the Precambrian era.
| Biology and health sciences | Nervous system | null |
58690 | https://en.wikipedia.org/wiki/Crystal%20structure | Crystal structure | In crystallography, crystal structure is a description of the ordered arrangement of atoms, ions, or molecules in a crystalline material. Ordered structures occur from the intrinsic nature of the constituent particles to form symmetric patterns that repeat along the principal directions of three-dimensional space in matter.
The smallest group of particles in the material that constitutes this repeating pattern is the unit cell of the structure. The unit cell completely reflects the symmetry and structure of the entire crystal, which is built up by repetitive translation of the unit cell along its principal axes. The translation vectors define the nodes of the Bravais lattice.
The lengths of the principal axes, or edges, of the unit cell and the angles between them are the lattice constants, also called lattice parameters or cell parameters. The symmetry properties of the crystal are described by the concept of space groups. All possible symmetric arrangements of particles in three-dimensional space may be described by the 230 space groups.
The crystal structure and symmetry play a critical role in determining many physical properties, such as cleavage, electronic band structure, and optical transparency.
Unit cell
Crystal structure is described in terms of the geometry of the arrangement of particles in the unit cells. The unit cell is defined as the smallest repeating unit having the full symmetry of the crystal structure. The geometry of the unit cell is defined as a parallelepiped, providing six lattice parameters taken as the lengths of the cell edges (a, b, c) and the angles between them (α, β, γ). The positions of particles inside the unit cell are described by the fractional coordinates (xi, yi, zi) along the cell edges, measured from a reference point. It is thus only necessary to report the coordinates of the smallest asymmetric subset of particles, called the crystallographic asymmetric unit. The asymmetric unit may be chosen so that it occupies the smallest physical space, which means that not all particles need to be physically located inside the boundaries given by the lattice parameters. All other particles of the unit cell are generated by the symmetry operations that characterize the symmetry of the unit cell. The collection of symmetry operations of the unit cell is expressed formally as the space group of the crystal structure.
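The six lattice parameters fully determine the mapping from fractional to Cartesian coordinates. The following minimal Python sketch builds that transformation matrix under the common convention that a lies along x and b lies in the xy-plane; the function and variable names are illustrative, not taken from any particular library:

    import numpy as np

    def frac_to_cart_matrix(a, b, c, alpha, beta, gamma):
        # Matrix taking fractional coordinates (x, y, z) to Cartesian ones;
        # angles are given in degrees.
        al, be, ga = np.radians([alpha, beta, gamma])
        # Square-root factor from the triclinic cell-volume formula.
        v = np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                    + 2 * np.cos(al) * np.cos(be) * np.cos(ga))
        return np.array([
            [a, b * np.cos(ga), c * np.cos(be)],
            [0, b * np.sin(ga), c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
            [0, 0,              c * v / np.sin(ga)],
        ])

    # A particle at fractional position (1/2, 1/2, 1/2) in a 4 x 5 x 6 orthorhombic cell:
    M = frac_to_cart_matrix(4.0, 5.0, 6.0, 90, 90, 90)
    print(M @ np.array([0.5, 0.5, 0.5]))  # approximately [2.0, 2.5, 3.0]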
Miller indices
Vectors and planes in a crystal lattice are described by the three-value Miller index notation. This syntax uses the indices h, k, and ℓ as directional parameters.
By definition, the syntax (hkℓ) denotes a plane that intercepts the three points a1/h, a2/k, and a3/ℓ, or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane with the unit cell (in the basis of the lattice vectors). If one or more of the indices is zero, it means that the planes do not intersect that axis (i.e., the intercept is "at infinity"). A plane containing a coordinate axis is translated so that it no longer contains that axis before its Miller indices are determined. The Miller indices for a plane are integers with no common factors. Negative indices are indicated with horizontal bars, as in $(1\bar{2}3)$. In an orthogonal coordinate system for a cubic cell, the Miller indices of a plane are the Cartesian components of a vector normal to the plane.
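Because the indices are proportional to the reciprocals of the intercepts, they can be recovered mechanically: take reciprocals, clear the fractions, and reduce to the smallest integers. A minimal Python sketch of this procedure (the function name is illustrative):

    from fractions import Fraction
    from functools import reduce
    from math import gcd

    def miller_indices(intercepts):
        # Intercepts are given in units of the lattice vectors; None means the
        # plane is parallel to that axis, so the corresponding index is zero.
        recips = [Fraction(0) if x is None else 1 / Fraction(x) for x in intercepts]
        # Clear denominators, then divide out any common factor.
        lcm = reduce(lambda m, n: m * n // gcd(m, n), (r.denominator for r in recips))
        ints = [int(r * lcm) for r in recips]
        g = reduce(gcd, (abs(i) for i in ints if i), 0) or 1
        return tuple(i // g for i in ints)

    # A plane with intercepts x = 1, y = 1/2, parallel to the z-axis:
    print(miller_indices([1, Fraction(1, 2), None]))  # (1, 2, 0)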
Considering only (hkℓ) planes intersecting one or more lattice points (the lattice planes), the distance d between adjacent lattice planes is related to the (shortest) reciprocal lattice vector orthogonal to the planes by the formula $d = \frac{2\pi}{|\mathbf{g}_{hk\ell}|}$.
Planes and directions
The crystallographic directions are geometric lines linking nodes (atoms, ions or molecules) of a crystal. Likewise, the crystallographic planes are geometric planes linking nodes. Some directions and planes have a higher density of nodes. These high density planes have an influence on the behavior of the crystal as follows:
Optical properties: Refractive index is directly related to density (or periodic density fluctuations).
Absorption and reactivity: Physical adsorption and chemical reactions occur at or near surface atoms or molecules. These phenomena are thus sensitive to the density of nodes.
Surface tension: The condensation of a material means that the atoms, ions or molecules are more stable if they are surrounded by other similar species. The surface tension of an interface thus varies according to the density on the surface.
Microstructural defects: Pores and crystallites tend to have straight grain boundaries following higher density planes.
Cleavage: This typically occurs preferentially parallel to higher density planes.
Plastic deformation: Dislocation glide occurs preferentially parallel to higher density planes. The perturbation carried by the dislocation (Burgers vector) is along a dense direction. The shift of one node in a more dense direction requires a lesser distortion of the crystal lattice.
Some directions and planes are defined by symmetry of the crystal system. In monoclinic, trigonal, tetragonal, and hexagonal systems there is one unique axis (sometimes called the principal axis) which has higher rotational symmetry than the other two axes. The basal plane is the plane perpendicular to the principal axis in these crystal systems. For triclinic, orthorhombic, and cubic crystal systems the axis designation is arbitrary and there is no principal axis.
Cubic structures
For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted a); similarly for the reciprocal lattice. So, in this common case, the Miller indices (ℓmn) and [ℓmn] both simply denote normals/directions in Cartesian coordinates. For cubic crystals with lattice constant a, the spacing d between adjacent (ℓmn) lattice planes is (from above): $d_{\ell mn} = \frac{a}{\sqrt{\ell^2 + m^2 + n^2}}$.
Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes:
Coordinates in angle brackets such as ⟨100⟩ denote a family of directions that are equivalent due to symmetry operations, such as [100], [010], [001] or the negative of any of those directions (see the enumeration sketch after this list).
Coordinates in curly brackets or braces such as {100} denote a family of plane normals that are equivalent due to symmetry operations, much the way angle brackets denote a family of directions.
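Under cubic symmetry, such a family can be generated by taking every permutation of the indices with every combination of signs. A small Python sketch of this enumeration (illustrative, not a routine from a crystallography library):

    from itertools import permutations, product

    def cubic_direction_family(u, v, w):
        # All directions equivalent to [u v w] under cubic symmetry:
        # every permutation of the indices with every choice of signs.
        family = set()
        for perm in permutations((u, v, w)):
            for signs in product((1, -1), repeat=3):
                family.add(tuple(s * x for s, x in zip(signs, perm)))
        return sorted(family)

    print(cubic_direction_family(1, 0, 0))
    # [(-1, 0, 0), (0, -1, 0), (0, 0, -1), (0, 0, 1), (0, 1, 0), (1, 0, 0)]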
For face-centered cubic (fcc) and body-centered cubic (bcc) lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions.
Interplanar spacing
The spacing d between adjacent (hkℓ) lattice planes is given by the expressions below for each crystal family; a short numerical check follows the list.
Cubic: $\frac{1}{d^2} = \frac{h^2 + k^2 + \ell^2}{a^2}$
Tetragonal: $\frac{1}{d^2} = \frac{h^2 + k^2}{a^2} + \frac{\ell^2}{c^2}$
Hexagonal: $\frac{1}{d^2} = \frac{4}{3}\left(\frac{h^2 + hk + k^2}{a^2}\right) + \frac{\ell^2}{c^2}$
Rhombohedral (primitive setting): $\frac{1}{d^2} = \frac{(h^2 + k^2 + \ell^2)\sin^2\alpha + 2(hk + k\ell + h\ell)(\cos^2\alpha - \cos\alpha)}{a^2(1 - 3\cos^2\alpha + 2\cos^3\alpha)}$
Orthorhombic: $\frac{1}{d^2} = \frac{h^2}{a^2} + \frac{k^2}{b^2} + \frac{\ell^2}{c^2}$
Monoclinic: $\frac{1}{d^2} = \frac{1}{\sin^2\beta}\left(\frac{h^2}{a^2} + \frac{k^2\sin^2\beta}{b^2} + \frac{\ell^2}{c^2} - \frac{2h\ell\cos\beta}{ac}\right)$
Triclinic: $\frac{1}{d^2} = \frac{\frac{h^2}{a^2}\sin^2\alpha + \frac{k^2}{b^2}\sin^2\beta + \frac{\ell^2}{c^2}\sin^2\gamma + \frac{2k\ell}{bc}(\cos\beta\cos\gamma - \cos\alpha) + \frac{2h\ell}{ac}(\cos\gamma\cos\alpha - \cos\beta) + \frac{2hk}{ab}(\cos\alpha\cos\beta - \cos\gamma)}{1 - \cos^2\alpha - \cos^2\beta - \cos^2\gamma + 2\cos\alpha\cos\beta\cos\gamma}$
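As a numerical check of the first few formulas, a minimal Python sketch (the function and argument names are illustrative):

    from math import sqrt

    def d_spacing(h, k, l, a, b=None, c=None, system="cubic"):
        # Interplanar spacing d for a few high-symmetry crystal families;
        # b and c default to a when not given.
        b = b if b is not None else a
        c = c if c is not None else a
        if system == "cubic":
            inv_d2 = (h*h + k*k + l*l) / a**2
        elif system == "tetragonal":
            inv_d2 = (h*h + k*k) / a**2 + l*l / c**2
        elif system == "orthorhombic":
            inv_d2 = h*h / a**2 + k*k / b**2 + l*l / c**2
        elif system == "hexagonal":
            inv_d2 = (4.0 / 3.0) * (h*h + h*k + k*k) / a**2 + l*l / c**2
        else:
            raise ValueError("system not covered by this sketch")
        return 1.0 / sqrt(inv_d2)

    # d(111) for a cubic cell with a = 4.05 (roughly aluminium's lattice constant, in angstroms):
    print(round(d_spacing(1, 1, 1, 4.05), 3))  # 2.338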
Classification by symmetry
The defining property of a crystal is its inherent symmetry. Performing certain symmetry operations on the crystal lattice leaves it unchanged. All crystals have translational symmetry in three directions, but some have other symmetry elements as well. For example, rotating the crystal 180° about a certain axis may result in an atomic configuration that is identical to the original configuration; the crystal has twofold rotational symmetry about this axis. In addition to rotational symmetry, a crystal may have symmetry in the form of mirror planes, and also the so-called compound symmetries, which are a combination of translation and rotation or mirror symmetries. A full classification of a crystal is achieved when all inherent symmetries of the crystal are identified.
Lattice systems
Lattice systems are a grouping of crystal structures according to the point groups of their lattice. All crystals fall into one of seven lattice systems. They are related to, but not the same as, the seven crystal systems.
The most symmetric, the cubic or isometric system, has the symmetry of a cube: it exhibits four threefold rotational axes oriented at 109.5° (the tetrahedral angle) with respect to each other. These threefold axes lie along the body diagonals of the cube. The other six lattice systems are hexagonal, tetragonal, rhombohedral (often confused with the trigonal crystal system), orthorhombic, monoclinic and triclinic.
Bravais lattices
Bravais lattices, also referred to as space lattices, describe the geometric arrangement of the lattice points, and therefore the translational symmetry of the crystal. The three dimensions of space afford 14 distinct Bravais lattices describing the translational symmetry. All crystalline materials recognized today, not including quasicrystals, fit in one of these arrangements. The fourteen three-dimensional lattices, classified by lattice system, are shown above.
The crystal structure consists of the same group of atoms, the basis, positioned around each and every lattice point. This group of atoms therefore repeats indefinitely in three dimensions according to the arrangement of one of the Bravais lattices. The characteristic rotation and mirror symmetries of the unit cell are described by its crystallographic point group.
Crystal systems
A crystal system is a set of point groups in which the point groups themselves and their corresponding space groups are assigned to a lattice system. Of the 32 point groups that exist in three dimensions, most are assigned to only one lattice system, in which case the crystal system and lattice system both have the same name. However, five point groups are assigned to two lattice systems, rhombohedral and hexagonal, because both lattice systems exhibit threefold rotational symmetry. These point groups are assigned to the trigonal crystal system.
In total there are seven crystal systems: triclinic, monoclinic, orthorhombic, tetragonal, trigonal, hexagonal, and cubic.
Point groups
The crystallographic point group or crystal class is the mathematical group comprising the symmetry operations that leave at least one point unmoved and that leave the appearance of the crystal structure unchanged. These symmetry operations include
Reflection, which reflects the structure across a reflection plane
Rotation, which rotates the structure a specified portion of a circle about a rotation axis
Inversion, which changes the sign of the coordinates of each point with respect to a center of symmetry or inversion point
Improper rotation, which consists of a rotation about an axis followed by an inversion.
Rotation axes (proper and improper), reflection planes, and centers of symmetry are collectively called symmetry elements. There are 32 possible crystal classes. Each one can be classified into one of the seven crystal systems.
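These operations can be represented concretely as 3×3 matrices acting on coordinates. A brief Python sketch of a few of them (the variable names are illustrative):

    import numpy as np

    identity  = np.eye(3)
    inversion = -np.eye(3)               # x -> -x through the inversion point
    mirror_xy = np.diag([1, 1, -1])      # reflection across the xy-plane
    rot2_z    = np.diag([-1, -1, 1])     # twofold (180 degree) rotation about z
    s4_z = np.array([[0, -1, 0],         # fourfold rotation about z followed by
                     [1,  0, 0],         # reflection across the xy-plane:
                     [0,  0, -1]])       # an improper rotation

    p = np.array([0.3, 0.1, 0.25])
    print(rot2_z @ p)     # [-0.3  -0.1   0.25]
    print(inversion @ p)  # [-0.3  -0.1  -0.25]
    print(s4_z @ p)       # [-0.1   0.3  -0.25]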
Space groups
In addition to the operations of the point group, the space group of the crystal structure contains translational symmetry operations. These include:
Pure translations, which move a point along a vector
Screw axes, which rotate a point around an axis while translating parallel to the axis.
Glide planes, which reflect a point through a plane while translating it parallel to the plane.
There are 230 distinct space groups.
Atomic coordination
By considering the arrangement of atoms relative to each other, their coordination numbers, interatomic distances, types of bonding, etc., it is possible to form a general view of the structures and alternative ways of visualizing them.
Close packing
The principles involved can be understood by considering the most efficient way of packing together equal-sized spheres and stacking close-packed atomic planes in three dimensions. For example, if plane A lies beneath plane B, there are two possible ways of placing an additional atom on top of layer B. If an additional layer were placed directly over plane A, this would give rise to the following series:
...ABABABAB...
This arrangement of atoms in a crystal structure is known as hexagonal close packing (hcp).
If, however, all three planes are staggered relative to each other and it is not until the fourth layer is positioned directly over plane A that the sequence is repeated, then the following sequence arises:
...ABCABCABC...
This type of structural arrangement is known as cubic close packing (ccp).
The unit cell of a ccp arrangement of atoms is the face-centered cubic (fcc) unit cell. This is not immediately obvious as the closely packed layers are parallel to the {111} planes of the fcc unit cell. There are four different orientations of the close-packed layers.
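The two stacking rules can be written down directly; a toy Python sketch that prints the repeating sequences:

    def stacking(pattern, n_layers):
        # Each close-packed layer occupies one of three lateral positions
        # (A, B or C); the pattern repeats cyclically.
        return "".join(pattern[i % len(pattern)] for i in range(n_layers))

    print(stacking("AB", 8))   # ABABABAB  (hexagonal close packing)
    print(stacking("ABC", 9))  # ABCABCABC (cubic close packing)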
APF and CN
One important characteristic of a crystalline structure is its atomic packing factor (APF). This is calculated by assuming that all the atoms are identical spheres, with a radius large enough that each sphere abuts the next. The atomic packing factor is the proportion of space filled by these spheres, which can be worked out by calculating the total volume of the spheres and dividing by the volume of the cell, as follows: $\mathrm{APF} = \frac{N_{\text{atoms}} \cdot \frac{4}{3}\pi r^3}{V_{\text{cell}}}$.
Another important characteristic of a crystalline structure is its coordination number (CN). This is the number of nearest neighbours of a central atom in the structure.
The APFs and CNs of the most common crystal structures are shown below:
Simple cubic: APF 0.52, CN 6
Body-centered cubic (bcc): APF 0.68, CN 8
Face-centered cubic (fcc): APF 0.74, CN 12
Hexagonal close-packed (hcp): APF 0.74, CN 12
Diamond cubic: APF 0.34, CN 4
The 74% packing efficiency of the FCC and HCP is the maximum density possible in unit cells constructed of spheres of only one size.
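The 0.74 and 0.68 figures follow directly from the geometry of touching spheres; a minimal Python check:

    from math import pi, sqrt

    def apf(n_atoms, radius, cell_volume):
        # Fraction of the unit-cell volume filled by the atomic spheres.
        return n_atoms * (4.0 / 3.0) * pi * radius**3 / cell_volume

    a = 1.0                  # lattice constant (arbitrary units)
    r_fcc = a * sqrt(2) / 4  # fcc: spheres touch along a face diagonal
    r_bcc = a * sqrt(3) / 4  # bcc: spheres touch along the body diagonal

    print(round(apf(4, r_fcc, a**3), 2))  # 0.74
    print(round(apf(2, r_bcc, a**3), 2))  # 0.68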
Interstitial sites
Interstitial sites refer to the empty spaces in between the atoms in the crystal lattice. These spaces can be filled by oppositely charged ions to form multi-element structures. They can also be filled by impurity atoms or self-interstitials to form interstitial defects.
Defects and impurities
Real crystals feature defects or irregularities in the ideal arrangements described above and it is these defects that critically determine many of the electrical and mechanical properties of real materials.
Impurities
When one atom substitutes for one of the principal atomic components within the crystal structure, alteration in the electrical and thermal properties of the material may ensue. Impurities may also manifest as electron spin impurities in certain materials. Research on magnetic impurities demonstrates that small concentrations of an impurity can substantially alter certain properties, such as specific heat; for example, impurities in semiconducting ferromagnetic alloys may lead to different properties, as first predicted in the late 1960s.
Dislocations
Dislocations in a crystal lattice are line defects that are associated with local stress fields. Dislocations allow shear at lower stress than that needed for a perfect crystal structure. The local stress fields result in interactions between the dislocations which then result in strain hardening or cold working.
Grain boundaries
Grain boundaries are interfaces where crystals of different orientations meet. A grain boundary is a single-phase interface, with crystals on each side of the boundary being identical except in orientation. The term "crystallite boundary" is sometimes, though rarely, used. Grain boundary areas contain those atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower energy grain boundary.
Treating a grain boundary geometrically as an interface of a single crystal cut into two parts, one of which is rotated, we see that there are five variables required to define a grain boundary. The first two numbers come from the unit vector that specifies a rotation axis. The third number designates the angle of rotation of the grain. The final two numbers specify the plane of the grain boundary (or a unit vector that is normal to this plane).
Grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve strength, as described by the Hall–Petch relationship. Since grain boundaries are defects in the crystal structure they tend to decrease the electrical and thermal conductivity of the material. The high interfacial energy and relatively weak bonding in most grain boundaries often makes them preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep.
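The Hall–Petch relationship mentioned above takes the form $\sigma_y = \sigma_0 + k\,d^{-1/2}$, where d is the average grain diameter. A short Python illustration; the constants and grain sizes below are hypothetical values chosen for the example, not measurements:

    from math import sqrt

    def hall_petch(sigma_0, k, d):
        # Yield strength rises as the average grain diameter d shrinks.
        return sigma_0 + k / sqrt(d)

    # Hypothetical alloy: sigma_0 = 100 MPa, k = 0.2 MPa*m^0.5
    for d in (1e-4, 1e-5, 1e-6):  # grain diameters in metres
        print(f"d = {d:.0e} m -> {hall_petch(100.0, 0.2, d):.0f} MPa")
    # d = 1e-04 m -> 120 MPa; d = 1e-05 m -> 163 MPa; d = 1e-06 m -> 300 MPa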
Grain boundaries are in general only a few nanometers wide. In common materials, crystallites are large enough that grain boundaries account for a small fraction of the material. However, very small grain sizes are achievable. In nanocrystalline solids, grain boundaries become a significant volume fraction of the material, with profound effects on such properties as diffusion and plasticity. In the limit of small crystallites, as the volume fraction of grain boundaries approaches 100%, the material ceases to have any crystalline character, and thus becomes an amorphous solid.
Prediction of structure
The difficulty of predicting stable crystal structures based on the knowledge of only the chemical composition has long been a stumbling block on the way to fully computational materials design. Now, with more powerful algorithms and high-performance computing, structures of medium complexity can be predicted using such approaches as evolutionary algorithms, random sampling, or metadynamics.
The crystal structures of simple ionic solids (e.g., NaCl or table salt) have long been rationalized in terms of Pauling's rules, first set out in 1929 by Linus Pauling, referred to by many since as the "father of the chemical bond". Pauling also considered the nature of the interatomic forces in metals, and concluded that about half of the five d-orbitals in the transition metals are involved in bonding, with the remaining nonbonding d-orbitals being responsible for the magnetic properties. Pauling was therefore able to correlate the number of d-orbitals in bond formation with the bond length, as well as with many of the physical properties of the substance. He subsequently introduced the metallic orbital, an extra orbital necessary to permit uninhibited resonance of valence bonds among various electronic structures.
In the resonating valence bond theory, the factors that determine the choice of one from among alternative crystal structures of a metal or intermetallic compound revolve around the energy of resonance of bonds among interatomic positions. It is clear that some modes of resonance would make larger contributions (be more mechanically stable than others), and that in particular a simple ratio of number of bonds to number of positions would be exceptional. The resulting principle is that a special stability is associated with the simplest ratios or "bond numbers": 1⁄2, 1⁄3, 2⁄3, 1⁄4, 3⁄4, etc. The choice of structure and the value of the axial ratio (which determines the relative bond lengths) are thus a result of the effort of an atom to use its valency in the formation of stable bonds with simple fractional bond numbers.
After postulating a direct correlation between electron concentration and crystal structure in beta-phase alloys, Hume-Rothery analyzed the trends in melting points, compressibilities and bond lengths as a function of group number in the periodic table in order to establish a system of valencies of the transition elements in the metallic state. This treatment thus emphasized the increasing bond strength as a function of group number. The operation of directional forces was emphasized in one article on the relation between bond hybrids and the metallic structures. The resulting correlation between electronic and crystalline structures is summarized by a single parameter, the weight of the d-electrons per hybridized metallic orbital. The "d-weight" calculates out to 0.5, 0.7 and 0.9 for the fcc, hcp and bcc structures respectively. The relationship between d-electrons and crystal structure thus becomes apparent.
In crystal structure predictions/simulations, periodicity is usually applied, since the system is imagined as being unlimited in all directions. Starting from a triclinic structure with no further symmetry properties assumed, the system may be driven to show some additional symmetry properties by applying Newton's second law to the particles in the unit cell and a recently developed dynamical equation for the system period vectors (lattice parameters including angles), even if the system is subject to external stress.
Polymorphism
Polymorphism is the occurrence of multiple crystalline forms of a material. It is found in many crystalline materials including polymers, minerals, and metals. According to Gibbs' rules of phase equilibria, these unique crystalline phases are dependent on intensive variables such as pressure and temperature. Polymorphism is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphs have different stabilities and may spontaneously and irreversibly transform from a metastable form (or thermodynamically unstable form) to the stable form at a particular temperature. They also exhibit different melting points, solubilities, and X-ray diffraction patterns.
One good example of this is the quartz form of silicon dioxide, or SiO2. In the vast majority of silicates, the Si atom shows tetrahedral coordination by 4 oxygens. All but one of the crystalline forms involve tetrahedral {SiO4} units linked together by shared vertices in different arrangements. In different minerals the tetrahedra show different degrees of networking and polymerization. For example, they occur singly, joined in pairs, in larger finite clusters including rings, in chains, double chains, sheets, and three-dimensional frameworks. The minerals are classified into groups based on these structures. In each of the 7 thermodynamically stable crystalline forms or polymorphs of crystalline quartz, each of the 4 vertices of the {SiO4} tetrahedra is shared with another tetrahedron, so that each silicon retains on average only 2 of its 4 oxygens, yielding the net chemical formula for silica: SiO2.
Another example is elemental tin (Sn), which is malleable near ambient temperatures but is brittle when cooled. This change in mechanical properties is due to the existence of its two major allotropes, α- and β-tin. The two allotropes that are encountered at normal pressure and temperature, α-tin and β-tin, are more commonly known as gray tin and white tin respectively. Two more allotropes, γ and σ, exist at temperatures above 161 °C and pressures above several GPa. White tin is metallic, and is the stable crystalline form at or above room temperature. Below 13.2 °C, tin exists in the gray form, which has a diamond cubic crystal structure, similar to diamond, silicon or germanium. Gray tin has no metallic properties at all, is a dull gray powdery material, and has few uses, other than a few specialized semiconductor applications. Although the α–β transformation temperature of tin is nominally 13.2 °C, impurities (e.g. Al, Zn, etc.) lower the transition temperature well below 0 °C, and upon addition of Sb or Bi the transformation may not occur at all.
Physical properties
Twenty of the 32 crystal classes are piezoelectric, and crystals belonging to one of these classes (point groups) display piezoelectricity. All piezoelectric classes lack inversion symmetry. Any material develops a dielectric polarization when an electric field is applied, but a substance that has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes.
There are a few crystal structures, notably the perovskite structure, which exhibit ferroelectric behavior. This is analogous to ferromagnetism, in that, in the absence of an electric field during production, the ferroelectric crystal does not exhibit a polarization. Upon the application of an electric field of sufficient magnitude, the crystal becomes permanently polarized. This polarization can be reversed by a sufficiently large electric field applied in the opposite direction, in the same way that a ferromagnet can be reversed. However, although they are called ferroelectrics, the effect is due to the crystal structure, not to the presence of a ferrous metal.
| Physical sciences | Crystallography | Physics |
58708 | https://en.wikipedia.org/wiki/Water%20buffalo | Water buffalo | The water buffalo (Bubalus bubalis), also called domestic water buffalo, Asian water buffalo and Asiatic water buffalo, is a large bovid originating in the Indian subcontinent and Southeast Asia. Today, it is also kept in Italy, the Balkans, Australia, North America, South America and some African countries. Two extant types of water buffalo are recognized, based on morphological and behavioural criteria: the river buffalo of the Indian subcontinent and further west to the Balkans, Egypt and Italy; and the swamp buffalo from Assam in the west through Southeast Asia to the Yangtze Valley of China in the east.
The wild water buffalo (Bubalus arnee) most probably represents the ancestor of the domestic water buffalo. Results of a phylogenetic study indicate that the river-type water buffalo probably originated in western India and was domesticated about 6,300 years ago, whereas the swamp-type originated independently from Mainland Southeast Asia and was domesticated about 3,000 to 7,000 years ago. The river buffalo dispersed west as far as Egypt, the Balkans, and Italy; while swamp buffalo dispersed to the rest of Southeast Asia and up to the Yangtze Valley.
Water buffaloes were traded from the Indus Valley Civilisation to Mesopotamia, in modern Iraq, in 2500 BC by the Meluhhas. The seal of a scribe employed by an Akkadian king shows the sacrifice of water buffaloes.
Water buffaloes are especially suitable for tilling rice fields, and their milk is richer in fat and protein than that of dairy cattle. A large feral population became established in northern Australia in the late 19th century, and there are smaller feral herds in Papua New Guinea, Tunisia and northeastern Argentina. Feral herds are also present in New Britain, New Ireland, Irian Jaya, Colombia, Guyana, Suriname, Brazil, and Uruguay.
Taxonomy
Carl Linnaeus first described the genus Bos and the water buffalo under the binomial Bos bubalis in 1758; the species was known to occur in Asia and was held as a domestic form in Italy. Ellerman and Morrison-Scott treated the wild and domestic forms of the water buffalo as conspecifics, whereas others treated them as different species. The nomenclatorial treatment of the wild and domestic forms has been inconsistent and varies between authors and even within the works of single authors.
In March 2003, the International Commission on Zoological Nomenclature achieved consistency in the naming of the wild and domestic water buffaloes by ruling that the scientific name Bubalus arnee is valid for the wild form. B. bubalis continues to be valid for the domestic form and applies also to feral populations.
In the early 1970s, different names were proposed for the river and swamp types of water buffalo; the river type was referred to as Bubalus bubalis bubalis, while the swamp type was referred to as Bubalus bubalis carabanensis. However, Bubalus carabanensis is considered a junior synonym of Bubalus kerabau.
Characteristics
The skin of the river buffalo is black, but some specimens may have dark, slate-coloured skin. Swamp buffaloes have a grey skin at birth, which becomes slate blue later. Albinoids are present in some populations. River buffaloes have longer faces, smaller girths, and bigger limbs than swamp buffaloes. Their dorsal ridges extend further back and taper off more gradually. Their horns grow downward and backward, then curve upward in a spiral. Swamp buffaloes are heavy-bodied and stockily built, with a short body and large belly. The forehead is flat, the eyes are prominent, the face is short, and the muzzle is wide. The neck is comparatively long, and the withers and croup are prominent. A dorsal ridge extends backward and ends abruptly just before the end of the chest. Their horns grow outward and curve in a semicircle, but always remain more or less on the plane of the forehead. The tail is short, reaching only to the hocks. Size of the body and shape of horns may vary greatly among breeds. Height at the withers is for bulls and for cows, but large individuals may attain . Head-to-body length at maturity typically ranges from with a long tail. They range in weight from , but weights of over have also been observed.
Tedong bonga is a piebald water buffalo featuring a unique black and white colouration that is favoured by the Toraja of Sulawesi.
The swamp buffalo has 48 chromosomes, while the river buffalo has 50 chromosomes. The two types do not readily interbreed, but fertile offspring can occur. Water buffalo-cattle hybrids have not been observed to occur, but the embryos of such hybrids reach maturity in laboratory experiments, albeit at lower rates than non-hybrids.
The rumen of the water buffalo differs from that of other ruminants. It contains a larger population of bacteria, particularly cellulolytic bacteria, fewer protozoa, and more fungal zoospores. In addition, higher levels of rumen ammonia nitrogen (NH4-N) and higher pH have been found in comparison with cattle.
Ecology and behavior
River buffaloes prefer deep water. Swamp buffaloes prefer to wallow in mudholes, which they make with their horns. During wallowing, they acquire a thick coating of mud. Both are well-adapted to a hot and humid climate with temperatures ranging from in the winter to and greater in the summer. Water availability is important in hot climates, since they need wallows, rivers, or splashing water to assist in thermoregulation. Some water buffalo breeds are adapted to saline seaside shores and saline sandy terrain.
Diet
Water buffaloes thrive on many aquatic plants. During floods, they graze submerged, raising their heads above the water and carrying quantities of edible plants. Water buffaloes eat reeds, Arundo donax, a kind of Cyperaceae, Eichhornia crassipes, and Juncaceae. Some of these plants are of great value to local peoples. Others, such as E. crassipes and A. donax, are a major problem in some tropical valleys and by eating them, the water buffaloes may help control these invasive plants.
Green fodders are widely used for intensive milk production and for fattening. Many fodder crops are conserved as hay, chaffed, or pulped. Fodders include alfalfa, the leaves, stems or trimmings of banana, cassava, Mangelwurzel, esparto, Leucaena leucocephala and kenaf, maize, oats, Pandanus, peanut, sorghum, soybean, sugarcane, bagasse, and turnips. Citrus pulp and pineapple wastes have been fed safely to buffalo. In Egypt, whole sun-dried dates are fed to milk buffalo at up to 25% of the standard feed mixture.
Reproduction
Swamp buffaloes generally become reproductive at an older age than river breeds. Young males in Egypt, India, and Pakistan are first mated around 3.0–3.5 years of age, but in Italy, they may be used as early as 2 years of age. Successful mating behaviour may continue until the animal is 12 years or even older. A good river buffalo male can impregnate 100 females in a year. A strong seasonal influence on mating occurs. Heat stress reduces libido.
Although water buffaloes are polyoestrous, their reproductive efficiency shows wide variation throughout the year. The cows exhibit a distinct seasonal change in displaying oestrus, conception rate, and calving rate. The age at the first oestrus of heifers varies between breeds from 13 to 33 months, but mating at the first oestrus is often infertile and usually deferred until they are 3 years old. Gestation lasts from 281 to 334 days, but most reports give a range between 300 and 320 days. Swamp buffaloes carry their calves for one or two weeks longer than river buffaloes. Finding water buffaloes that continue to work well at the age of 30 is not uncommon, and instances of a working life of 40 years have been recorded.
Domestication and breeding
The most probable ancestor of domesticated water buffalo is the wild water buffalo (Bubalus arnee), which is native to the Indian subcontinent and tropical Southeast Asia. Two types of domesticated water buffalo are recognized, based on morphological and behavioural criteria – the river buffalo (of the western Indian subcontinent and west to the Levant, the Balkans, and the Mediterranean) and the swamp buffalo (found from Assam and East India in the west, east to the Yangtze Valley of China, and south through Indochina and Southeast Asia).
River- and swamp-type water buffalo are believed to have been domesticated independently. Results of a phylogenetic study indicate that the river-type water buffalo probably originated in western India and was probably domesticated about 6,300 years ago; the swamp-type originated independently in Mainland Southeast Asia and was domesticated between 3,000 and 7,000 years ago. The river buffalo dispersed west as far as Egypt, southern Europe, the Levant, and the Mediterranean regions; swamp buffalo dispersed in the opposite direction, to the rest of Southeast Asia, and as far as the Yangtze Valley in China.
Swamp-type water buffalo entered Island Southeast Asia from at least 2,500 years ago through the northern Philippines, where butchered remains of domesticated water buffalo have been recovered from the Neolithic Nagsabaran site (part of the Lal-lo and Gattaran Shell Middens, to 400 CE). These became the ancestors of the distinctly swamp-type carabao buffalo breed of the Philippines which, in turn, spread to Guam, Indonesia, and Malaysia, among other smaller islands.
The present-day river buffalo is the result of complex domestication processes involving more than one maternal lineage and a significant maternal gene flow from wild populations after the initial domestication events. Twenty-two breeds of the river buffalo are known, including the Murrah, Nili-Ravi, Surti, Carabao, Anatolian, Mediterranean, and Egyptian buffaloes. China has a huge variety of water buffalo genetic resources, with 16 local swamp buffalo breeds in various regions.
Genetic studies
Results of mitochondrial DNA analyses indicate that the two types were domesticated independently. Sequencing of cytochrome b (CytB) genes of Bubalus species implies that the water buffalo originated from at least two populations, and that the river-type and the swamp-type have differentiated at the full species level. The genetic distance between the two types is so large that a divergence time of about 1.7 million years has been suggested. The swamp-type was noticed to have the closest relationship with the tamaraw of the northern Philippines.
A 2008 DNA analysis of Neolithic water buffalo remains in northern China (previously used as evidence of a Chinese domestication origin) found that the remains were of the extinct Bubalus mephistopheles and are not genetically related to modern domesticated water buffaloes. Another study in 2004 also concluded that the remains were from wild specimens. Both indicate that water buffaloes were first domesticated outside of China. Analyses of mitochondrial DNA and single-nucleotide polymorphism indicate that swamp and river buffaloes were crossbred in China.
A 2020 analysis of the genomes of 91 swamp and 30 river buffaloes showed that they had already separated before domestication, about . A 2021 analysis of water buffalo and lowland anoa genomes unexpectedly found the anoa branching somewhere between swamp and river buffaloes. A 2023 Filipino study using the CytB gene instead found the tamaraw branching between the two.
Distribution of populations
By 2011, the global water buffalo population was about 172 million. The estimated global population of water buffalo is 208,098,759 head, distributed among 77 countries on five continents.
In Asia
More than 95.8% of the world population of water buffaloes is kept in Asia, including both the river type and the swamp type. The water buffalo population in India numbered over 97.9 million head in 2003, representing 56.5% of the world population. They are primarily of the river type, with 11 well-defined breeds: the Bhadawari, Banni, Jafarabadi, Marathwadi, Mehsana, Murrah, Nagpuri, Nili-Ravi, Pandharpuri, Surti, and Toda buffaloes. Swamp buffaloes occur only in small areas of northeastern India and are not distinguished into breeds.
In 2003, the second-largest population lived in China, with 22.76 million head, all of the swamp type, with some breeds kept only in the lowlands and others kept only in the mountains; as of 2003, 3.2 million swamp-type carabao buffaloes were in the Philippines, nearly 3 million swamp buffaloes were in Vietnam, and roughly 773,000 buffaloes were in Bangladesh. About 750,000 head were estimated in Sri Lanka in 1997. In Japan, the water buffalo was used as a domestic animal throughout the Ryukyu Islands (Okinawa Prefecture), but it is now almost extinct there and kept mainly as a tourist attraction. Per a 2015 report, about 836,500 water buffaloes were in Nepal.
The water buffalo is the main dairy animal in Pakistan, with 23.47 million head in 2010. Of these, 76% are kept in the Punjab; the rest are mostly kept in the province of Sindh. The water buffalo breeds used are the Nili-Ravi, Kundi, and Azi Kheli. Karachi alone had upwards of 400,000 head of water buffalo in 2021, providing dairy as well as meat to the local population.
In Thailand, the number of water buffaloes dropped from more than 3 million head in 1996 to less than 1.24 million head in 2011. Slightly over 75% of them are kept in the country's northeastern region. By the beginning of 2012, less than one million were in the country, partly as a result of illegal shipments to neighbouring countries where sales prices are higher than in Thailand.
Water buffaloes are also present in the Mesopotamian Marshes in the southern region of Iraq. The draining of the Mesopotamian Marshes under Saddam Hussein was an attempt to punish the south for the 1991 Iraqi uprisings. After the fall of the regime in 2003, marked by the destruction of the Firdos Square statue, these lands were reflooded, and a 2007 report on Maysan and Dhi Qar showed a steady increase in the number of water buffaloes, putting it at 40,008 head in those two provinces.
In Europe and the Mediterranean
Water buffaloes were probably introduced to Europe from India or other eastern sources. In Italy, the Longobard King Agilulf is said to have received water buffaloes around 600 AD, probably a present from the Khan of the Avars, a Turkic nomadic tribe that dwelt near the Danube River at the time. Sir H. Johnston knew of a herd of water buffaloes, presented by a King of Naples to the Bey of Tunis in the mid-19th century, that had reverted to a feral state in northern Tunis.
European water buffaloes are all of the river-type and considered to be of the same breed named the Mediterranean buffalo. In Italy, the Mediterranean type was particularly selected and is called the Mediterranea Italiana buffalo to distinguish it from other European breeds, which differ genetically. Mediterranean buffalo are also kept in Romania, Bulgaria, Greece, Serbia, Albania, Kosovo, and North Macedonia, with a few hundred in the United Kingdom, Ireland, Germany, the Netherlands, Switzerland, and Hungary. Little exchange of breeding water buffaloes has occurred among countries, so each population has its own phenotypic features and performances. In Bulgaria, they were crossbred with the Indian Murrah breed, and in Romania, some were crossbred with Bulgarian Murrah. As of 2016, about 13,000 buffaloes were in Romania, down from 289,000 in 1989.
Populations in Turkey are of the Anatolian buffalo breed.
In Australia
Between 1824 and 1849, swamp buffaloes were introduced into the Northern Territory, from Timor and Kisar and probably other islands in the Indonesian archipelago, to provide meat and hide. When the third attempt at settlement by the British on the Cobourg Peninsula was abandoned in 1849, the buffaloes were released. In the 1880s, a few river buffaloes were imported from India to Darwin for milk. Water buffalo have been the main grazing animals on the subcoastal plains and river basins between Darwin and Arnhem Land (the "Top End") since the 1880s. They became feral and caused significant environmental damage. Their only natural predators in Australia are crocodiles and dingoes, which can only prey on the younger animals. As a result, they were hunted in the Top End from 1885 until 1980.
In the early 1960s, an estimated 150,000 to 200,000 water buffaloes were living in the plains and nearby areas. The brucellosis and tuberculosis eradication campaign (BTEC) brought a huge culling program that reduced water buffalo herds to a fraction of the numbers reached in the 1980s. The BTEC ended when the Northern Territory was declared free of the disease in 1997. Numbers dropped dramatically as a result of the campaign, but had recovered to an estimated 150,000 animals across northern Australia in 2008, and up to an estimated 200,000 by 2022. Both swamp and river buffaloes exist in feral populations, but swamp buffaloes are more prevalent.
Significance to Aboriginal peoples
"Nganabbarru" is the Bininj Kunwok word for buffalo, which are represented in rock art paintings at Djabidjbakalloi. The buffalo left behind after the failed British attempt at settlement became a threat to the local Aboriginal peoples, as they had no guns at that time. As the herds expanded across into Arnhem Land, some local people seized the chance to hunt the animals for their hides in the 1880s, as they did not belong to anyone, unlike sheep and cattle. The industry continues to provide employment opportunities and income for traditional owners.
Uses
During the 1950s, water buffaloes were hunted for their skins and meat, which were exported and used in local trade. In the late 1970s, live exports began to Cuba and later continued to other countries. Swamp buffaloes are now crossed with river buffaloes in artificial insemination programs, and are kept in many areas of Australia. Some of these crossbreeds are used for milk production. Melville Island is a popular hunting location, with a steady population of up to 4,000 individuals. Safari outfits are run from Darwin to Melville Island and other locations in the Top End, often with the use of bush pilots; buffalo horns, which can reach record tip-to-tip spreads, are prized hunting trophies.
Water buffaloes were exported live to Indonesia until 2011, at a rate of about 3,000 per year. After the live export ban that year, the exports dropped to zero, and had not resumed as of June 2013. Tom Dawkins, CEO of NT Buffalo Industry Council, said in May 2022 that culling should be a last resort, given the flourishing and growing live export trade and economic benefits for Aboriginal people. By the end of 2021, cattle exports to Indonesia had dropped to the lowest level since 2012, while demand for buffalo was growing both in Australia and in Southeast Asia.
In South America
Water buffaloes were introduced into the Amazon River basin in 1895. They are now extensively used there for meat and dairy production. In 2005, the water buffalo herd in the Brazilian Amazon stood at roughly 1.6 million head, of which 460,000 were located in the lower Amazon floodplains. The breeds used include the Mediterranean from Italy, the Murrah and Jafarabadi from India, and the carabao from the Philippines. The official Brazilian herd number for 2019 was 1.39 million head.
During the 1970s, small herds were imported to Costa Rica, Ecuador, Cayenne, Panama, Suriname, Guyana, and Venezuela.
In Argentina, many game ranches raise water buffaloes for commercial hunting.
Other important herds in South America are in Colombia (more than 300,000 head) and Argentina (more than 100,000), with unconfirmed reports for Venezuela ranging from 200,000 to 500,000 head.
In North America
In 1974, four water buffaloes were imported to the United States from Guam to be studied at the University of Florida. In February 1978, the first herd arrived for commercial farming. Until 2002, only one commercial breeder was in the United States. Water buffalo meat is imported from Australia. Until 2011, water buffaloes were raised in Gainesville, Florida, from young obtained from zoo overflow. They were used primarily for meat production, and frequently sold as hamburger. Other U.S. ranchers use them for production of high-quality mozzarella cheese. Water buffaloes are also kept in the Caribbean, specifically in Trinidad and Tobago and Cuba.
Husbandry
The husbandry system of water buffaloes depends on the purpose for which they are bred and maintained. Most of them are kept by people who work on small farms in family units. Their water buffaloes live in close association with them, and are often their greatest capital asset. The women and girls in India generally look after the milking buffaloes, while the men and boys are concerned with the working animals. Throughout Asia, they are commonly tended by children, who are often seen leading or riding their charges to wallows. Water buffaloes are the ideal animals for work in the deep mud of paddy fields because of their large hooves and flexible foot joints. They are often referred to as "the living tractor of the East", being the most efficient and economical means of cultivating small fields. In most rice-producing countries, they are used for threshing and for transporting the sheaves during the rice harvest. They provide power for oilseed mills, sugarcane presses, and devices for raising water. They are widely used as pack animals, and in India and Pakistan also for heavy haulage. In their invasions of Europe, the Turks used water buffaloes for hauling heavy battering rams. Their dung is used as a fertilizer, and as a fuel when dried.
Around 26 million water buffaloes are slaughtered each year for meat worldwide. They contribute 72 million tonnes of milk and three million tonnes of meat annually to world food, much of it in areas that are prone to nutritional imbalances. In India, river buffaloes are kept mainly for milk production and for transport, whereas swamp buffaloes are kept mainly for work and a small amount of milk.
Dairy products
Water buffalo milk presents physicochemical features different from those of other ruminant species, such as a higher content of fatty acids and proteins. The physical and chemical parameters of swamp-type and river-type water buffalo milk differ.
Water buffalo milk contains higher levels of total solids, crude protein, fat, calcium, and phosphorus, and slightly higher content of lactose compared with those of cow milk. The high level of total solids makes water buffalo milk ideal for processing into value-added dairy products such as cheese. The conjugated linoleic acid content in water buffalo milk ranged from 4.4 mg/g fat in September to 7.6 mg/g fat in June. Seasons and genetics may play a role in variation of CLA level and changes in gross composition of water buffalo milk.
Water buffalo milk is processed into a large variety of dairy products, including:
Cream churns much faster at higher fat levels and gives higher overrun than cow cream.
Butter from water buffalo cream displays more stability than that from cow cream.
Ghee from water buffalo milk has a different texture with a bigger grain size than ghee from cow milk.
Heat-concentrated milk products in the Indian subcontinent include paneer, khoa, rabri, kheer and basundi.
Fermented milk products include dahi, yogurt and strained yogurt.
Whey is used for making ricotta and mascarpone in Italy, and alkarish in Syria and Egypt.
Hard cheeses include braila in Romania, and rahss in Egypt.
Soft cheeses include mozzarella in Italy, karish, mish and madhfor in Iraq, alghab in Syria, kesong puti in the Philippines, and vladeasa in Romania.
Meat and skin products
Water buffalo meat, sometimes called "carabeef", is often passed off as beef in certain regions, and is also a major source of export revenue for India. In many Asian regions, water buffalo meat is less preferred due to its toughness; however, recipes have evolved (rendang, for example) where the slow cooking process and spices not only make the meat palatable, but also preserve it, an important factor in hot climates where refrigeration is not always available.
Their hides provide tough and useful leather, often used for shoes.
Bone and horn products
The bones and horns are often made into jewellery, especially earrings. Horns are used for the embouchure of musical instruments, such as ney and kaval.
Environmental effects
Wildlife conservation scientists have started to recommend and use introduced populations of feral water buffaloes in far-away lands to manage uncontrolled vegetation growth in and around natural wetlands. Introduced water buffaloes at home in such environs provide cheap service by regularly grazing the uncontrolled vegetation and opening up clogged water bodies for waterfowl, wetland birds, and other wildlife. Grazing water buffaloes are sometimes used in Great Britain for conservation grazing, such as in the Chippenham Fen National Nature Reserve. The water buffaloes can better adapt to wet conditions and poor-quality vegetation than cattle.
In uncontrolled circumstances, though, water buffaloes can cause environmental damage, such as trampling vegetation, disturbing bird and reptile nesting sites, and spreading exotic weeds.
Reproductive research
In vitro fertilization
In 2004, the Philippine Carabao Center (PCC) in Nueva Ecija produced the first swamp-type water buffalo born from an in vitro-produced, vitrified embryo. It was named "Glory" after President Gloria Macapagal Arroyo. The PCC, created through Republic Act 7307 (the Carabao Act of 1992), was Joseph Estrada's most successful project as an opposition senator.
There have been many attempts at creating hybrids between domestic cattle and domestic water buffaloes; however, to date, none has been successful, as the embryos usually develop only to the 8-cell stage before failing.
Cloning
The first cloned water buffaloes were born in 2007, when Chinese scientists used micromanipulation-based somatic cell nuclear transfer (SCNT) to produce several clones of a swamp-type water buffalo. Three calves were born; two died young.
In 2007, the PCC announced plans to clone the swamp-type water buffalo. The plan was to use cloning as a tool for genetic improvement, producing "super buffalo calves" by multiplying existing germplasm without modifying or altering genetic material. A 2009 Voice of America article said the PCC was "close to producing the world's first water buffalo clone".
In 2009, National Dairy Research Institute (Karnal, India) cloned a river-type water buffalo using a simplified SCNT procedure they called "handmade cloning". The calf, named Samrupa, did not survive more than a week due to genetic defects. A few months later, a second cloned calf named Garima was successfully born. The Central Institute for Research on Buffaloes, India's premier research institute on water buffaloes, also became the second institute in the world to successfully clone the water buffalo in 2016.
In culture
In the Thai and Sinhalese animal and planetary zodiacs, the water buffalo is the third animal of the Thai zodiac and the fourth animal of the Sinhalese zodiac of Sri Lanka.
Some ethnic groups, such as Batak and Toraja in Indonesia and the Derung in China, sacrifice water buffaloes or kerbau (called horbo in Batak or tedong in Toraja) at several festivals.
The Minangkabau of West Sumatra adorn their houses and clothing with motifs based on the buffalo's horns, as a tribute to the legend of a contest between a buffalo (kabau) chosen by their kingdom and one chosen by (traditionally) the Majapahit empire, which their kingdom won.
In Chinese tradition, the water buffalo is associated with a contemplative life.
A water buffalo head was a symbol of death in Tibet.
The carabao is considered a national symbol of the Philippines, although this has no basis in Philippine law.
In Indian mythology, the Hindu god of death, Yama, rides on a water buffalo.
A male water buffalo is sacrificed in many parts of India during festivals associated with the Shaktism sect of Hinduism.
Legend has it that Chinese philosophical sage Laozi left China through the Hangu Pass riding a water buffalo.
In Gujarat and some parts of Rajasthan in India, many communities, notably the Rayka, worship the goddess Vihat, who uses a male water buffalo as her vahana (mount). The goddess Varahi is likewise depicted riding a water buffalo.
According to folklore, Mahishasura, a half-buffalo and half-human demon, was killed by the goddess Durga.
In Vietnam, water buffaloes are often the most valuable possession of poor farmers.
Many ethnic groups use the horns of water buffaloes as game trophies, or for musical instruments and ornaments. The water buffalo is also the second animal of the Vietnamese zodiac.
Fighting festivals
The Pasungay Festival is held annually in the town of San Joaquin, Iloilo, the Philippines.
The Moh juj Water Buffalo Fighting Festival is held every year in Bhogali Bihu in Assam.
The Do Son Water Buffalo Fighting Festival of Vietnam is held each year on the ninth day of the eighth month of the lunar calendar at Do Son Township, Haiphong City. It is one of the most popular festivals and events in Haiphong City. Preparations begin two to three months in advance: the competing water buffalo are selected and methodically trained. It is a traditional Vietnamese festival attached to a Water God worshipping ceremony and the Hien Sinh custom, displaying the martial spirit of the local people of Do Son.
The Hai Luu Water Buffalo Fighting Festival of Vietnam has existed since the second century BC, when General Lu Gia had water buffalo slaughtered to feast the local people and his warriors, and organized buffalo fighting for amusement. At the end of the festival, all the fighting water buffaloes are slaughtered as tributes to the deities.
The Ko Samui Water Buffalo Fighting Festival of Thailand is a popular event held on special occasions such as New Year's Day in January, and Songkran in mid-April. This festival features head-wrestling bouts in which two male water buffaloes are pitted against one another. Unlike in Spanish-style bullfighting, wherein bulls get killed while fighting sword-wielding men, the festival held at Ko Samui is a fairly harmless contest. The fighting season varies according to ancient customs and ceremonies. The first water buffalo to turn and run away is considered the loser; the winning water buffalo becomes worth several million baht.
The Ma'Pasilaga Tedong Water Buffalo Fighting Festival, in Tana Toraja Regency of Sulawesi Island, Indonesia, is a popular event held during the Rambu Solo, or burial festival, of Tana Toraja.
Racing festivals
The Carabao Carroza Festival is held annually every May in the town of Pavia, Iloilo, the Philippines.
The Kambala races of Karnataka, India, take place between October and March. The races are conducted by having the water buffaloes (bulls) run in long, parallel, slushy ditches, driven by men standing on wooden planks drawn by the buffaloes. The objectives of the race are to finish first and to raise the water to the greatest height. Kambala is also a rural sport: races are arranged both with and without competition, and as a part of thanksgiving (to God), in about 50 villages of coastal Karnataka.
Annual water buffalo races are held in Chonburi Province of Thailand and in Pakistan.
The Chon Buri water buffalo racing festival is held annually in mid-October in downtown Chonburi, south of Bangkok. About 300 water buffaloes race in groups of five or six, spurred on by bareback jockeys wielding wooden sticks, as hundreds of spectators cheer. The water buffalo has always played an important role in agriculture in Thailand, so for the farmers it is an important festival, and a celebration among rice farmers before the rice harvest. At dawn, farmers walk their water buffaloes through the surrounding rice fields, splashing them with water to keep them cool before leading them to the race field.
The Babulang water buffalo racing festival in Sarawak, Malaysia, is the largest or grandest of the many rituals, ceremonies and festivals of the traditional Bisaya community of Limbang, Sarawak. Highlights are the Ratu Babulang competition and the water buffalo races, which can only be found in this town in Sarawak, Malaysia.
At the Vihear Suor village water buffalo racing festival in Cambodia, a race closes the Festival of the Dead. Each year, Cambodians visit Buddhist temples across the country to honor their deceased loved ones during this 15-day period, but in Vihear Suor village, northeast of Phnom Penh, citizens end the festival with a water buffalo race to entertain visitors and honour a pledge made hundreds of years ago. According to local tradition, many village cattle, which provide rural Cambodians with muscle power to plow their fields and transport agricultural products, once died from an unknown disease; the villagers prayed to a spirit to save their animals and promised to show their gratitude by holding a water buffalo race each year on the last day of the "P'chum Ben" festival, as it is known in Cambodia. The race draws hundreds of spectators, who come to see riders charge down the racing field, bouncing up and down on the backs of their water buffaloes, whose horns are draped with colorful cloth.
Buffalo racing in Kerala is similar to the Kambala races.
Religious festival
The Pulilan Carabao Festival is held annually every 14 and 15 May in the Philippine town of Pulilan in honor of St. Isidore the Laborer, the patron saint of farmers. As thanksgiving for a bountiful harvest every year, farmers parade their carabaos in the main town street, adorning them with garlands and other decorations. One of the highlights of the festival is the kneeling of the carabaos in front of the parish church.
Alum
An alum is a type of chemical compound, usually a hydrated double sulfate salt of aluminium with the general formula XAl(SO4)2·12H2O, where X is a monovalent cation such as potassium or ammonium. By itself, "alum" often refers to potassium alum, with the formula KAl(SO4)2·12H2O. Other alums are named after the monovalent ion, such as sodium alum and ammonium alum.
The name "alum" is also used, more generally, for salts with the same formula and structure, except that aluminium is replaced by another trivalent metal ion like chromium, or sulfur is replaced by another chalcogen like selenium. The most common of these analogs is chrome alum .
In most industries, the name "alum" (or "papermaker's alum") is used to refer to aluminium sulfate, Al2(SO4)3·nH2O, which is used for most industrial flocculation (the variable n is an integer whose size depends on the amount of water absorbed into the alum). In medicine, the word "alum" may also refer to aluminium hydroxide gel used as a vaccine adjuvant.
History
Alum found at archaeological sites
The western desert of Egypt was a major source of alum substitutes in antiquity. These evaporites were mainly a mixture of hydrated sulfate minerals.
The ancient Greek historian Herodotus mentions Egyptian alum as a valuable commodity in The Histories.
The production of potassium alum from alunite is archaeologically attested on the island of Lesbos.
The site was abandoned during the 7th century CE, but dates back at least to the 2nd century CE. Native alumen from the island of Melos appears to have been a mixture mainly of alunogen (Al2(SO4)3·17H2O) with potassium alum and other minor sulfates.
Alumen in Pliny and Dioscorides
A detailed description of a substance termed alumen occurs in the Roman author Pliny the Elder's Natural History.
By comparing Pliny's description with the account of stypteria (στυπτηρία) given by Dioscorides, it is obvious the two are identical. Pliny informs us that a form of alumen was found naturally in the earth, and terms it salsugoterrae.
Pliny wrote that different substances were distinguished by the name of alumen, but they were all characterised by a certain degree of astringency, and were all employed for dyeing and medicine. Pliny wrote that there is another kind of alum that the ancient Greeks term schiston, and which "splits into filaments of a whitish colour". From the name schiston and the mode of formation, it seems that this kind was the salt that forms spontaneously on certain salty minerals, as alum slate and bituminous shale, and consists mainly of sulfates of iron and aluminium. One kind of alumen was a liquid, which was apt to be adulterated; but when pure it had the property of blackening when added to pomegranate juice. This property seems to characterize a solution of iron sulfate in water; a solution of ordinary (potassium) alum would possess no such property. Contamination with iron sulfate was greatly disliked as this darkened and dulled dye colours. In some places the iron sulfate may have been lacking, so the salt would be white and would be suitable, according to Pliny, for dyeing bright colors.
Pliny describes several other types of alumen, but it is not clear what these minerals are. The alumen of the ancients, then, was not always potassium alum, nor even an alkali aluminium sulfate.
Alum described in medieval texts
Alum and green vitriol (iron sulfate) both have a sweetish, astringent taste, and they had overlapping uses. Therefore, through the Middle Ages, alchemists and other writers do not seem to have distinguished the two salts accurately. In the writings of the alchemists, the words misy, sory, and chalcanthum are applied to either compound, and the name atramentum sutorium, which one might expect to belong exclusively to green vitriol, is applied indiscriminately to both.
Alum was the most common mordant (substance used to set dyes on fabrics) used by the dye industry, especially in Islamic countries, during the Middle Ages. It was the main export of the Chad region, from where it was transported to the markets of Egypt and Morocco, and then to Europe. Less significant sources were found in Egypt and Yemen.
Modern understanding of the alums
During the early 1700s, G. E. Stahl claimed that reacting sulfuric acid with limestone produced a sort of alum. The error was soon corrected by Johann Heinrich Pott and Andreas Sigismund Marggraf, who showed that the precipitate obtained when an alkali is poured into a solution of alum, namely alumina, is quite different from lime and chalk, and is one of the ingredients in common clay.
Marggraf also showed that perfect crystals with properties of alum can be obtained by dissolving alumina in sulfuric acid and adding potash or ammonia to the concentrated solution. In 1767, Torbern Bergman observed the need for potassium or ammonium sulfates to convert aluminium sulfate into alum, while sodium or calcium would not work.
The composition of common alum was determined finally by Louis Vauquelin in 1797. As soon as Martin Klaproth discovered the presence of potassium in leucite and lepidolite,
Vauquelin demonstrated that common alum is a double salt, composed of sulfuric acid, alumina, and potash. In the same journal volume, Chaptal published the analysis of four different kinds of alum, namely, Roman alum, Levant alum, British alum, and an alum manufactured by himself, confirming Vauquelin's result.
Production
Some alums occur as minerals, the most important being alunite.
The most important alums – potassium, sodium, and ammonium – are produced industrially. Typical recipes involve combining aluminium sulfate with the sulfate of the monovalent cation. The aluminium sulfate is usually obtained by treating minerals like alum schist, bauxite, and cryolite with sulfuric acid.
Types
Aluminium-based alums are named after the monovalent cation. Unlike the other alkali metals, lithium does not form alums, a fact attributed to the small size of its ion.
The most important alums are:
Potassium alum, KAl(SO4)2·12H2O, also called "potash alum" or simply "alum"
Sodium alum, NaAl(SO4)2·12H2O, also called "soda alum" or "SAS"
Ammonium alum, NH4Al(SO4)2·12H2O
Chemical properties
Aluminium-based alums have a number of common chemical properties. They are soluble in water, have a sweetish taste, react as acids (turning blue litmus red), and crystallize in regular octahedra. In alums, each metal ion is surrounded by six water molecules. When heated, they liquefy, and if the heating is continued, the water of crystallization is driven off, the salt froths and swells, and at last an amorphous powder remains. They are astringent and acidic.
Crystal structure
Alums crystallize in one of three different crystal structures. These classes are called α-, β- and γ-alums. The first X-ray crystal structures of alums were reported in 1927 by James M. Cork and Lawrence Bragg, and were used to develop the phase retrieval technique isomorphous replacement.
Solubility
The solubility of the various alums in water varies greatly: sodium alum is readily soluble in water, while caesium and rubidium alums are only slightly soluble. The various solubilities are shown in the following table as the parts of each alum dissolved by 100 parts of water at a given temperature:

Temperature | Ammonium | Potassium | Rubidium | Caesium
0 °C | 2.62 | 3.90 | 0.71 | 0.19
10 °C | 4.50 | 9.52 | 1.09 | 0.29
50 °C | 15.9 | 44.11 | 4.98 | 1.235
80 °C | 35.20 | 134.47 | 21.60 | 5.29
100 °C | 70.83 | 357.48 | - | -
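The steep rise in solubility with temperature, most pronounced for potassium alum, is what makes purification by recrystallization from a hot, saturated solution effective. To read off rough values between the tabulated temperatures, one can interpolate; the sketch below is a minimal illustration over the table's data (the dictionary and helper function are ad hoc, not any standard chemistry API), and since real solubility curves are markedly nonlinear, the linear estimate is only approximate.

```python
# Linear interpolation over the tabulated alum solubilities
# (parts of salt per 100 parts of water). Data are copied from the
# table above; the helper itself is an ad hoc sketch, not a library API.

SOLUBILITY = {
    "ammonium":  [(0, 2.62), (10, 4.50), (50, 15.9), (80, 35.20), (100, 70.83)],
    "potassium": [(0, 3.90), (10, 9.52), (50, 44.11), (80, 134.47), (100, 357.48)],
    "rubidium":  [(0, 0.71), (10, 1.09), (50, 4.98), (80, 21.60)],
    "caesium":   [(0, 0.19), (10, 0.29), (50, 1.235), (80, 5.29)],
}

def estimate_solubility(alum: str, temp_c: float) -> float:
    """Linearly interpolate solubility (per 100 parts water) at temp_c."""
    points = SOLUBILITY[alum]
    if not points[0][0] <= temp_c <= points[-1][0]:
        raise ValueError("temperature outside the tabulated range")
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if t0 <= temp_c <= t1:
            return s0 + (temp_c - t0) / (t1 - t0) * (s1 - s0)
    raise AssertionError("unreachable")

# Example: potassium alum near room temperature
print(round(estimate_solubility("potassium", 25), 1))  # about 22.5
```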
Uses
Aluminium-based alums have been used since antiquity, and are still important for many industrial processes. The most widely used alum is potassium alum. It has been used since antiquity as a flocculant to clarify turbid liquids, as a mordant in dyeing, and in tanning. It is still widely used in water treatment, for medicine, for cosmetics (in deodorant), for food preparation (in baking powder and pickling), and to fire-proof paper and cloth.
Alum is also used as a styptic, in styptic pencils available from pharmacists, or as an alum block, available from barber shops and gentlemen's outfitters, to stem bleeding from shaving nicks; and as an astringent. An alum block can be used directly as a perfume-free deodorant (antiperspirant), and unprocessed mineral alum is sold in Indian bazaars for just that purpose. Throughout Island Southeast Asia, potassium alum is most widely known as tawas and has numerous uses. It is used as a traditional antiperspirant and deodorant, and in traditional medicine for open wounds and sores. The crystals are usually ground into a fine powder before use.
During the 19th century, alum was used along with other substances like plaster of Paris to adulterate certain food products, particularly bread. It was used to make lower-grade flour appear whiter, allowing the producers to spend less on whiter flour. Because it retains water, it would make the bread heavier, meaning that merchants could charge more for it in their shops. The amount of alum present in each loaf of bread could reach concentrations that would be toxic to humans and cause chronic diarrhea, which could result in the death of young children.
Alum is used as a mordant in traditional textiles; and in Indonesia and the Philippines, solutions of tawas, salt, borax, and organic pigments were used to change the color of gold ornaments. In the Philippines, alum crystals were also burned and allowed to drip into a basin of water by babaylan for divination. It is also used in other rituals in the animistic anito religions of the islands.
For traditional Japanese art, alum and animal glue were dissolved in water, forming a liquid known as dousa (), and used as an undercoat for paper sizing.
Alum, in the form of potassium aluminium sulfate or ammonium aluminium sulfate, in a concentrated bath of hot water, is regularly used by jewelers and machinists to dissolve hardened steel drill bits that have broken off in items made of aluminium, copper, brass, gold (any karat), silver (both sterling and fine), and stainless steel. This is because alum does not react chemically to any significant degree with any of these metals, but does corrode carbon steel. When heat is applied to an alum mixture holding a piece of work with a stuck drill bit, the bit, if small enough, can sometimes be dissolved and removed within hours.
Related compounds
Many trivalent metals are capable of forming alums. The general form of an alum is XM(SO4)2·nH2O, where X is an alkali metal or ammonium, M is a trivalent metal, and n often is 12. The most important example is chrome alum, KCr(SO4)2·12H2O, a dark violet crystalline double sulfate of chromium and potassium, which was used in tanning.
In general, alums are formed more easily when the alkali metal atom is larger. This rule was first stated by Locke in 1902, who found that if a trivalent metal does not form a caesium alum, it will not form an alum with any other alkali metal or with ammonium either.
Selenate-containing alums
Selenium-containing or selenate alums are also known; they contain selenium in place of sulfur in the sulfate anion, giving selenate (SeO42−) instead. They are strong oxidizing agents.
Mixed alums
In some cases, solid solutions of alums with different monovalent and trivalent cations may occur.
Other hydrates
In addition to the alums, which are dodecahydrates, double sulfates and selenates of univalent and trivalent cations occur with other degrees of hydration. These materials may also be referred to as alums, including the undecahydrates such as mendozite and kalinite, hexahydrates such as guanidinium and dimethylammonium "alums", tetrahydrates such as goldichite, monohydrates such as thallium plutonium sulfate and anhydrous alums (yavapaiites). These classes include differing, but overlapping, combinations of ions.
Other double sulfates
A pseudo alum is a double sulfate of the typical formula AB2(SO4)4·22H2O, where A is a divalent metal ion, such as cobalt (wupatkiite), manganese (apjohnite), magnesium (pickeringite) or iron (halotrichite or feather alum), and B is a trivalent metal ion.
Double sulfates with the general formula AB(SO4)2·12H2O are also known, where A is a monovalent cation such as sodium, potassium, rubidium, caesium, thallium, ammonium (NH4+), methylammonium (CH3NH3+), hydroxylammonium (NH3OH+) or hydrazinium (N2H5+), and B is a trivalent metal ion, such as aluminium, chromium, titanium, manganese, vanadium, iron, cobalt, gallium, molybdenum, indium, ruthenium, rhodium, or iridium.
Analogous selenates also occur. The possible combinations of univalent cation, trivalent cation, and anion depend on the sizes of the ions.
A Tutton salt is a double sulfate of the typical formula A2B(SO4)2·6H2O, where A is a monovalent cation and B a divalent metal ion.
Double sulfates of the composition A2B2(SO4)3, where A is a monovalent cation and B is a divalent metal ion, are referred to as langbeinites, after the prototypical potassium magnesium sulfate, K2Mg2(SO4)3.
Teff
Teff, also known as Eragrostis tef, Williams lovegrass, or annual bunch grass, is an annual grass, a species of lovegrass native to Ethiopia, where it first originated in the Ethiopian Highlands. It is cultivated for its edible seeds, also known as teff. Teff was one of the earliest plants domesticated, and it is one of the most important staple crops in Ethiopia.
Description
Eragrostis tef is a self-pollinated, tetraploid, annual cereal grass. Teff is a C4 plant, which allows it to fix carbon more efficiently in drought and high temperatures, and it is intermediate between a tropical and a temperate grass. The name teff is thought to originate from the Amharic word teffa, which means "lost", probably referring to its tiny seeds, which are less than 1 mm in diameter. Teff is a fine-stemmed, tufted grass with large crowns and many tillers. Its roots are shallow, but develop a massive fibrous rooting system. Plant height varies depending on the variety and environmental conditions. As with many ancient crops, teff is quite adaptive and can grow in various environmental conditions; in particular, it can be cultivated in dry environments, but also under wet conditions on marginal soils.
Teff originated in the Ethiopian Highlands, and it is one of the most important cereals in Ethiopia and Eritrea. It is grown for its edible seeds and also for its straw to feed cattle. The seeds are very small, about a millimeter in length; a thousand grains weigh only a fraction of a gram. Their color ranges from white to deep reddish brown. Teff is similar to millet and quinoa in cooking, but the seed is much smaller and cooks faster, thus using less fuel.
Distribution
Teff is mainly cultivated in Ethiopia and Eritrea. It is one of the most important staple crops in these two countries, where it is used to make injera. In 2016, Ethiopia grew more than 90 percent of the world's teff. It is now also marginally cultivated in India, Australia, Germany, the Netherlands, Spain, and the US, particularly in Idaho, California, Texas, and Nevada. Because of its very small seeds, a handful is enough to sow a large area. This property makes teff particularly suited to a seminomadic lifestyle.
History
Teff is believed to have originated in Ethiopia between 4000 BC and 1000 BC. Genetic evidence points to E. pilosa as the most likely wild ancestor. A 19th-century identification of teff seeds from an ancient Egyptian site is now considered doubtful; the seeds in question (no longer available for study) are more likely of E. aegyptiaca, a common wild grass in Egypt.
Teff is the most important commodity produced and consumed in Ethiopia, where the flat pancake-like injera provides a livelihood for around 6.5 million small farmers. In 2006, the Ethiopian government outlawed the export of raw teff, fearing export-driven domestic shortages like those suffered by South American countries after the explosion of quinoa consumption in Europe and the US. Processed teff, namely injera, could still be exported and was mainly bought by the Ethiopian and Eritrean diaspora living in northern Europe, the Middle East, and North America. After a few years, fears of a domestic shortage of teff in the event of an international market opening decreased: teff yields had increased by 40 to 50% over the five previous years, while prices had remained stable in Ethiopia. This led the government to partially lift the export ban in 2015. To ensure that domestic production would not be reduced, export licenses were granted only to 48 commercial farmers who had not cultivated the plant before. Lack of mechanization is a barrier to potential increases in teff exports. Yet the increasing demand, rising by 7–10% per year, and the subsequent increase in exports are encouraging the country to speed up the modernization of agriculture and boosting research. Because of its potential as an economic success, a few other countries, including the US and some European countries, already cultivate teff and sell it on domestic markets.
Uses
Teff is a multipurpose crop of high importance for the Ethiopian diet and culture. In Ethiopia, teff provides two-thirds of the daily protein intake. It is important not only for human nutrition, but also as fodder for livestock and as a building material. Teff is the main ingredient of injera, a sourdough-risen flatbread. During meals, it is often eaten with meat or ground pulses. Sometimes it is also eaten as porridge. Moreover, teff can be used to prepare alcoholic drinks, called arak'e or katikalla, or beer, called t'ella or fersso. Finally, due to its high mineral content, teff is also mixed with soybeans, chickpeas, or other grains to manufacture baby foods.
According to a study in Ethiopia, farmers indicated a preference among consumers for white teff over darker colored varieties. As a nutritious fodder, teff is used to feed ruminants in Ethiopia and horses in the United States. It is a source of animal feed, especially during the dry season, and it is often preferred over straw from other cereals. Teff grass can be used as a construction material when mixed with mud to plaster the walls of local grain storage facilities.
Ecology
Teff is adaptable and can grow in various environments over a wide range of altitudes; however, it does not tolerate frost. The highest yields are obtained at mid-range altitudes with moderate annual rainfall and mild daily temperatures; yields decrease when annual rainfall falls below 250 mm and when the average temperature during pollination exceeds 22 °C. Despite its superficial root system, teff is quite drought-resistant, thanks to its ability to regenerate rapidly after a moderate water stress and to produce fruits in a short time span. It is daylight-sensitive and flowers best with 12 hours of daylight. Teff is usually cultivated on pH-neutral soils, but it can tolerate acidic soils down to a pH below 5. Teff has a C4 photosynthesis mechanism.
Cultivation
The cultivation of teff is labor-intensive and the small size of its seeds makes it difficult to handle and transport without loss. In Ethiopia, teff is mostly produced during the main rain season, between July and November. It is known as an "emergency crop" because it is planted late in the season, when the temperatures are warmer, and most other crops have already been planted. Teff germination generally occurs 3–12 days after sowing. Optimal germination temperatures range from 15 to 35 °C; below 10 °C, germination almost does not occur. Teff is traditionally sown or broadcast by hand, on firm, humid soil. Usual sowing density ranges from 15 to 20 kg/ha, though farmers can sow up to 50 kg/ha, because the seeds are hard to spread equally and a higher sowing density helps to reduce weed competition at the early stage. Seeds are either left at the soil surface or slightly covered by a thin layer of soil, but must not be planted at a depth greater than 1 cm. The field can be subsequently rolled. Sowing can also be done mechanically; row planting reduces lodging.
Recommended fertilization doses are 25–60 kg/ha for N, and 10–18 kg/ha for P. Teff responds more to nitrogen than to phosphorus; thus, high nitrogen inputs increase the biomass production and size of the plants, thereby increasing lodging. To avoid this, farmers can decrease nitrogen input, cultivate teff after a legume crop or adjust sowing time so that the rains have stopped when the crop reaches heading stage. In Ethiopia, teff is commonly used in crop rotations with other cereals and legumes.
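As a rough illustration of what these sowing and fertilization figures mean per field, the sketch below simply multiplies rates by area; the field size and the particular rates chosen from the quoted ranges are hypothetical, not recommendations.

```python
# Hypothetical input calculation for a teff field, using values picked
# from the ranges quoted above (15-20 kg/ha seed for hand sowing,
# 25-60 kg/ha N, 10-18 kg/ha P). Illustrative only, not agronomic advice.

field_ha = 2.0             # assumed field size in hectares
seed_rate_kg_ha = 18       # within the usual hand-sowing range
n_dose_kg_ha = 40          # nitrogen, within the recommended range
p_dose_kg_ha = 14          # phosphorus, within the recommended range

print(f"Seed needed:  {field_ha * seed_rate_kg_ha:.0f} kg")  # 36 kg
print(f"N fertilizer: {field_ha * n_dose_kg_ha:.0f} kg")     # 80 kg
print(f"P fertilizer: {field_ha * p_dose_kg_ha:.0f} kg")     # 28 kg
```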
Teff is harvested 2–6 months after sowing, when the vegetative parts start to turn yellow. If teff is harvested past its maturation, seeds will fall off, especially in windy or rainy weather conditions. In Ethiopia, harvest lasts from November to January; harvest is usually done manually, with sickles. Farmers cut the plants at the soil surface, pile them up in the field and transport them to the threshing area. Teff is traditionally threshed by using animals walking on the harvest. Alternatively, some farmers rent threshing machines used for other cereals. The seeds are easy to store, as they are resistant to most pests during storage. Teff seeds can stay viable several years if direct contact with humidity and sun is avoided. Average yields in Ethiopia reach around two tonnes per ha. One single inflorescence can produce up to 1000 seeds, and one plant up to 10,000. Moreover, teff offers some promising opportunities for breeding programs: the first draft of the Eragrostis tef genome was published in 2014 and research institutes have started selecting for more resistant varieties. In 1996, the US National Research Council characterized teff as having the "potential to improve nutrition, boost food security, foster rural development and support sustainable landcare."
Challenges and prospects
The major challenges in teff production are its low yield and high susceptibility to lodging. Efforts to conventionally breed teff towards higher yields started in the 1950s and led to an average annual increase in yield of 0.8%. However, no considerable improvements concerning the susceptibility of lodging have been made, due mainly to low demand outside of Ethiopia and Eritrea.
High-yielding varieties, such as Quencho, were widely adopted by farmers in Ethiopia. Sequencing of the teff genome improved breeding, and an ethyl methanesulphonate (EMS)-mutagenized population was then used to breed the first semi-dwarf lodging-tolerant teff line, called kegne. In 2015, researchers tested 28 new teff varieties and identified three promising lines that generated yields of up to 4.7 tonnes per ha.
The "Teff Improvement Project" marked a milestone by releasing the first teff variety Tesfa to the Ethiopian markets in March 2017. Areas of further development include: "(i) improving productivity of teff; (ii) overcoming the lodging malady; (iii) developing climate-smart and appropriate crop and soil management options; (iv) developing tolerance to abiotic stresses such as drought and soil acidity; (v) developing suitable pre- and post-harvest mechanization technologies suitable for smallholder farmers as well as commercial farms; (vi) food processing and nutrition aspects with special attention to the development of different food recipes and value-added products; (vii) developing crop protection measures against diseases, insect pests and weeds; and (viii) improving or strengthening socio-economics and agricultural extension services."
Pests
The tef shoot fly (Atherigona hyalinipennis) is a major pest of the crop.
Other insect pests include:
central shoot fly Delia arambourgi (seedling feeder)
Wello-bush cricket Decticoides brevipennis (flower feeder)
red tef worm Mentaxya ignicollis
tef epilachna beetle Chnootriba similis (leaf feeder); also transmits rice yellow mottle virus in rice
chrysomelid black beetle Erlangerius niger (adults feed on developing grains and leaves)
stem-boring wasp Eurytomocharis eragrostidis in the United States
Nutritional value
Uncooked teff is 9% water, 73% carbohydrates, 13% protein, and 2% fat. Cooked teff is 75% water, 20% carbohydrates, 4% protein, and less than 1% fat (table). A reference serving of cooked teff is a rich source of protein, dietary fiber, and manganese, and contains moderate amounts of thiamin, phosphorus, iron, magnesium, and zinc (table). The fiber content of teff is also higher than in most other cereals.
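As a plausibility check, the macronutrient shares for uncooked teff can be converted to an energy density with the general Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat); the arithmetic below is illustrative and not taken from the cited table.

```python
# Energy estimate for 100 g of uncooked teff from its macronutrient
# shares, using the general Atwater factors (4/4/9 kcal per gram for
# carbohydrate/protein/fat). Illustrative arithmetic only.
carb_g, protein_g, fat_g = 73, 13, 2   # grams per 100 g, from the text
kcal = 4 * carb_g + 4 * protein_g + 9 * fat_g
print(kcal)  # 362 kcal per 100 g, typical for a whole-grain cereal
```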
Teff is gluten free, and a method has been developed to process teff into a flour with a wider range of baking applications, such as for bread and pasta.
Patent and bio-piracy
In 2003, a Dutch company, Health and Performance Food International (HPFI), paired with the Ethiopian Institute of Biodiversity Conservation to introduce teff to European markets. The original agreement was for Ethiopia to provide HPFI with a dozen strains of teff to market globally, and the two entities would split the proceeds.
HPFI's CEO, Jans Roosjen, had taken out two patents on teff in 2003 and 2007, claiming that his way of milling and storing the flour was unique. HPFI went bankrupt in 2009, allowing Roosjen to continue to utilize those patents and the marketing rights for the grain while being freed from the original agreement with Ethiopia. Ethiopia only received 4,000 euros over five years of collaboration.
Roosjen ended up suing a Dutch bakery company, Bakers, for patent infringement because they were selling teff baked goods. The Dutch patent office declared that the patent was void, citing that the methods used to bake and mix flours were "general professional knowledge". The deadline for Roosjen to appeal the decision expired in 2019, officially allowing Ethiopia access to Dutch teff markets.
However, Roosjen's company Ancientgrains BV still maintains patent rights in Belgium, Germany, Britain, Austria and Italy.
Teff is inherent to Ethiopia's national culture and identity, and the government of Ethiopia has expressed intent to hold Roosjen accountable to the fullest extent of international patent law, as well as to regain ownership over international markets of its most important food.
Allergen
An allergen is an otherwise harmless substance that triggers an allergic reaction in sensitive individuals by stimulating an immune response.
In technical terms, an allergen is an antigen capable of stimulating a type-I hypersensitivity reaction in atopic individuals through immunoglobulin E (IgE) responses. Most humans mount significant IgE responses only as a defense against parasitic infections. However, some individuals respond to many common environmental antigens. This hereditary predisposition is called atopy. In atopic individuals, non-parasitic antigens stimulate inappropriate IgE production, leading to type I hypersensitivity.
Sensitivities vary widely from one person (or from one animal) to another. A very broad range of substances can be allergens to sensitive individuals.
Examples
Allergens can be found in a variety of sources, such as dust mite excretion, pollen, pet dander, or even royal jelly. Food allergies are not as common as food sensitivity, but some foods such as peanuts (a legume), nuts, seafood and shellfish are the cause of serious allergies in many people.
The United States Food and Drug Administration recognizes nine foods as major food allergens: peanuts, tree nuts, eggs, milk, shellfish, fish, wheat, soy, and most recently sesame, as well as sulfites (chemical-based, often found in flavors and colors in foods) at 10 ppm and over. In other countries, due to differences in the genetic profiles of their citizens and different levels of exposure to specific foods, the official allergen lists vary. Canada recognizes all nine of the allergens recognized by the US, as well as mustard. The European Union additionally recognizes other gluten-containing cereals, as well as celery and lupin.
Another allergen is urushiol, a resin produced by poison ivy and poison oak, which causes the skin rash known as urushiol-induced contact dermatitis by changing a skin cell's configuration so that it is no longer recognized by the immune system as part of the body. Various trees and wood products, such as paper, cardboard, and MDF, can also cause mild to severe allergy symptoms, such as asthma and skin rash, through touch or through inhalation of sawdust.
An allergic reaction can be caused by any form of direct contact with the allergen—consuming food or drink one is sensitive to (ingestion), breathing in pollen, perfume or pet dander (inhalation), or brushing a body part against an allergy-causing plant (direct contact). Other common causes of serious allergy are wasp, fire ant and bee stings, penicillin, and latex. An extremely serious form of an allergic reaction is called anaphylaxis. One form of treatment is the administration of sterile epinephrine to the person experiencing anaphylaxis, which suppresses the body's overreaction to the allergen, and allows for the patient to be transported to a medical facility.
Common
In addition to foreign proteins found in foreign serum (from blood transfusions) and vaccines, common allergens include:
Animal products
Fel d 1 (Allergy to cats)
fur and dander
cockroach calyx
wool
dust mite excretion
Drugs
penicillin
sulfonamides
salicylates (also found naturally in numerous fruits)
Foods
celery and celeriac
corn or maize
eggs (typically albumen, the white)
fruit
pumpkin, egg-plant
legumes
beans
peas
peanuts
soybeans
milk
seafood
sesame
soy
tree nuts
pecans
almonds
wheat
Insect stings
bee sting venom
wasp sting venom
mosquito bites
Mold spores
Top 5 allergens discovered in patch tests in 2005–06:
nickel sulfate (19.0%)
Balsam of Peru (11.9%)
fragrance mix I (11.5%)
quaternium-15 (10.3%), and
neomycin (10.0%).
Metals
nickel
chromium
Other
latex
wood
Plant pollens (hay fever)
grass: ryegrass, timothy-grass
weeds: ragweed, plantago, nettle, Artemisia vulgaris, Chenopodium album, sorrel
trees: birch, alder, hazel, hornbeam, Aesculus, willow, poplar, Platanus, Tilia, Olea, Ashe juniper, Alstonia scholaris
Seasonal
Seasonal allergy symptoms are commonly experienced during specific parts of the year, usually during spring, summer or fall when certain trees or grasses pollinate. This depends on the kind of tree or grass. For instance, some trees such as oak, elm, and maple pollinate in the spring, while grasses such as Bermuda, timothy and orchard pollinate in the summer.
Grass allergy is generally linked to hay fever because their symptoms and causes are broadly similar. Symptoms include rhinitis, which causes sneezing and a runny nose, as well as allergic conjunctivitis, which includes watering and itchy eyes. An initial tickle on the roof of the mouth or in the back of the throat may also be experienced.
Also, depending on the season, the symptoms may be more severe and people may experience coughing, wheezing, and irritability. A few people even become depressed, lose their appetite, or have problems sleeping. Moreover, since the sinuses may also become congested, some people experience headaches.
If both parents have had allergies, there is a 66% chance for an individual to experience seasonal allergies; the risk drops to 60% if just one parent has had allergies. The immune system also strongly influences seasonal allergies, because it reacts differently to different allergens, such as pollens. When an allergen enters the body of an individual predisposed to allergies, it triggers an immune reaction and the production of antibodies. These allergen-specific antibodies migrate to mast cells lining the nose, eyes, and lungs. When the allergen drifts into the nose again, the mast cells release histamine and other chemicals that irritate and inflame the moist membranes lining the nose and produce the symptoms of an allergic reaction: scratchy throat, itching, sneezing, and watery eyes. Some symptoms that differentiate allergies from a cold include:
No fever.
Mucous secretions are runny and clear.
Sneezes occurring in rapid, repeated sequences.
Itchy throat, ears and nose.
These symptoms usually last longer than 7–10 days.
Among seasonal allergens, some cross-react and effectively produce new allergies. For instance, grass pollen allergens cross-react with food allergy proteins in vegetables such as onion, lettuce, carrots, celery, and corn. In addition, foods related to birch pollen allergens, such as apples, grapes, peaches, celery, and apricots, can produce severe itching in the ears and throat. Cypress pollen allergy brings a cross-reactivity between diverse species, such as olive, privet, ash, and Russian olive tree pollen allergens. In some rural areas, there is another form of seasonal grass allergy, combining airborne particles of pollen mixed with mold.
Recent research has suggested that humans might have developed allergies as a defense against parasites. According to Yale University immunologist Ruslan Medzhitov, protease allergens cleave the same sensor proteins that evolved to detect proteases produced by parasitic worms. Additionally, a report on seasonal allergies called "Extreme Allergies and Global Warming" found that many allergy triggers are worsening due to climate change; 16 states in the United States were named "allergen hotspots" for large projected increases in allergenic tree pollen if global warming pollution keeps increasing. The report's researchers therefore argued that global warming is bad news for millions of asthmatics in the United States whose asthma attacks are triggered by seasonal allergies. Seasonal allergies are one of the main triggers for asthma, along with colds or flu, cigarette smoke, and exercise. In Canada, for example, up to 75% of asthmatics also have seasonal allergies.
Diagnosis
Based on the symptoms seen in the patient, the answers given during symptom evaluation, and a physical exam, doctors can make a diagnosis to identify whether the patient has a seasonal allergy. After making the diagnosis, the doctor can tell the main cause of the allergic reaction and recommend the treatment to follow. Two tests may be done to determine the cause: a blood test and a skin test. Allergists do skin tests in one of two ways: either dropping some purified liquid of the allergen onto the skin and pricking the area with a small needle, or injecting a small amount of allergen under the skin.
Alternative tools are available to identify seasonal allergies, such as laboratory tests, imaging tests, and nasal endoscopy. In the laboratory tests, the doctor will take a nasal smear and examine it microscopically for factors that may indicate a cause, such as increased numbers of eosinophils (a type of white blood cell); a high eosinophil count indicates that an allergic condition might be present.
Another laboratory test is the blood test for IgE (immunoglobulin production), such as the radioallergosorbent test (RAST) or the more recent enzyme allergosorbent tests (EAST), implemented to detect high levels of allergen-specific IgE in response to particular allergens. Although blood tests are less accurate than the skin tests, they can be performed on patients unable to undergo skin testing. Imaging tests can be useful to detect sinusitis in people who have chronic rhinitis, and they can work when other test results are ambiguous. There is also nasal endoscopy, wherein a tube is inserted through the nose with a small camera to view the passageways and examine any irregularities in the nose structure. Endoscopy can be used for some cases of chronic or unresponsive seasonal rhinitis.
Fungal
In 1952, basidiospores were described as possible airborne allergens, and they were linked to asthma in 1969. Basidiospores are the dominant airborne fungal allergens. Fungal allergies are associated with seasonal asthma and are considered a major source of airborne allergens. The basidiospore family includes mushrooms, rusts, smuts, brackets, and puffballs. Airborne spores from mushrooms reach levels comparable to those of mold and pollens. Mushroom respiratory allergy affects as many as 30 percent of those with allergic disorders, but mushrooms are believed to account for less than 1 percent of food allergies. Heavy rainfall (which increases fungal spore release) is associated with increased hospital admissions of children with asthma. A study in New Zealand found that 22 percent of patients with respiratory allergic disorders tested positive for basidiospore allergies. Mushroom spore allergies can cause either immediate allergic symptomatology or delayed allergic reactions. Those with asthma are more likely to have immediate allergic reactions, and those with allergic rhinitis are more likely to have delayed allergic responses. One study found that 27 percent of patients were allergic to basidiomycete mycelia extracts and 32 percent were allergic to basidiospore extracts, demonstrating the high incidence of fungal sensitisation in individuals with suspected allergies. Of basidiomycete cap, mycelia, and spore extracts, spore extracts have been found to be the most reliable for diagnosing basidiomycete allergy.
In Canada, 8% of children attending allergy clinics were found to be allergic to Ganoderma, a basidiospore-producing fungus. Pleurotus ostreatus, Cladosporium, and Calvatia cyathiformis are significant airborne spores. Other significant fungal allergens include Aspergillus and the Alternaria-Penicillium families. In India, Fomes pectinatus is a predominant airborne allergen, affecting up to 22 percent of patients with respiratory allergies. Some fungal airborne allergens, such as Coprinus comatus, are associated with worsening of eczematous skin lesions. Children who are born during autumn months (during fungal spore season) are more likely to develop asthmatic symptoms later in life.
Treatment
Treatment includes over-the-counter medications, antihistamines, nasal decongestants, allergy shots, and alternative medicine. In the case of nasal symptoms, antihistamines are normally the first option. They may be taken together with pseudoephedrine to help relieve a stuffy nose, and they can stop the itching and sneezing. Some over-the-counter options are Benadryl and Tavist. However, these antihistamines may cause extreme drowsiness; therefore, people are advised not to operate heavy machinery or drive while taking this kind of medication. Other side effects include dry mouth, blurred vision, constipation, difficulty with urination, confusion, and light-headedness. There is also a newer, second generation of antihistamines, generally classified as "non-sedating" or anti-drowsy antihistamines, which include cetirizine, loratadine, and fexofenadine.
An example of a nasal decongestant is pseudoephedrine; its side effects include insomnia, restlessness, and difficulty urinating. Some other nasal sprays are available by prescription, including azelastine and ipratropium. Some of their side effects include drowsiness. For eye symptoms, it is important to first bathe the eyes with plain eyewashes to reduce the irritation. People should not wear contact lenses during episodes of conjunctivitis.
Allergen immunotherapy treatment involves administering doses of allergens to accustom the body to the substance and induce specific long-term tolerance. Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops) or by injections under the skin (subcutaneous). Immunotherapy contains a small amount of the substance that triggers the allergic reactions.
Ladders are also used for egg and milk allergies as a home-based therapy mainly for children. Such methods cited in the UK involve the gradual introduction of the allergen in a cooked form where the protein allergenicity has been reduced to become less potent. By reintroducing the allergen from a fully cooked, usually baked, state research suggests that a tolerance can emerge to certain egg and milk allergies under the supervision of a dietitian or specialist. The suitability of this treatment is debated between UK and North American experts.
| Biology and health sciences | Specific diseases | Health |
58862 | https://en.wikipedia.org/wiki/Three%20utilities%20problem | Three utilities problem | The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved.
This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph K3,3, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that K3,3 is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is K3,3. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for K3,3 the minimum number of crossings is one.
K3,3 is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem. It has also been called the Thomsen graph after 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph.
History
A review of the history of the three utilities problem is given by Kullman. He states that most published references to the problem characterize it as "very ancient". In the earliest publication found by Kullman, Dudeney names it "water, gas, and electricity". However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas". Dudeney also published the same puzzle previously, in The Strand Magazine in 1913. A competing claim of priority goes to Sam Loyd, who was quoted by his son in a posthumous biography as having published the problem in 1900.
Another early version of the problem involves connecting three houses to three wells. It is stated similarly to a different (and solvable) puzzle that also involves three houses and three fountains, with all three fountains and one house touching a rectangular wall; the puzzle again involves making non-crossing connections, but only between three designated pairs of houses and wells or fountains, as in modern numberlink puzzles. Loyd's puzzle "The Quarrelsome Neighbors" similarly involves connecting three houses to three gates by three non-crossing paths (rather than nine as in the utilities problem); one house and the three gates are on the wall of a rectangular yard, which contains the other two houses within it.
As well as in the three utilities problem, the graph K3,3 appears in late 19th-century and early 20th-century publications both in early studies of structural rigidity and in chemical graph theory, where Julius Thomsen proposed it in 1886 for the then-uncertain structure of benzene. In honor of Thomsen's work, K3,3 is sometimes called the Thomsen graph.
Statement
The three utilities problem can be stated as follows: suppose three houses each need to be connected to the water, gas, and electricity companies, with a separate line from each house to each company. Is there a way to make all nine connections without any of the lines crossing each other?
The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing.
In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph K3,3 is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle.
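This formalization also lends itself to a quick computational check. The following sketch uses the Python networkx library (an implementation choice, not part of the original problem) to build K3,3 and test it for planarity:

    import networkx as nx

    # Build the complete bipartite graph K3,3: three houses, three utilities,
    # and one edge for each house-utility pair.
    G = nx.complete_bipartite_graph(3, 3)
    print(G.number_of_nodes(), G.number_of_edges())  # 6 9

    # check_planarity returns a flag and, for planar graphs, an embedding.
    is_planar, _ = nx.check_planarity(G)
    print(is_planar)  # False: no drawing solves the puzzle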
Puzzle solutions
Unsolvability
As it is usually presented (on a flat two-dimensional plane), the solution to the utility puzzle is "no": there is no way to make all nine connections without any of the lines crossing each other.
In other words, the graph K3,3 is not planar. Kazimierz Kuratowski stated in 1930 that K3,3 is nonplanar, from which it follows that the problem has no solution. Kullman, however, states that "Interestingly enough, Kuratowski did not publish a detailed proof that [K3,3] is non-planar".
One proof of the impossibility of finding a planar embedding of K3,3 uses a case analysis involving the Jordan curve theorem. In this solution, one examines different possibilities for the locations of the vertices with respect to the 4-cycles of the graph and shows that they are all inconsistent with a planar embedding.
Alternatively, it is possible to show that any bridgeless bipartite planar graph with V vertices and E edges has E ≤ 2V − 4, by combining the Euler formula V − E + F = 2 (where F is the number of faces of a planar embedding) with the observation that the number of faces is at most half the number of edges (the vertices around each face must alternate between houses and utilities, so each face has at least four edges, and each edge belongs to exactly two faces). In the utility graph, E = 9 and 2V − 4 = 8, so it is untrue that E ≤ 2V − 4. Because it does not satisfy this inequality, the utility graph cannot be planar.
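The arithmetic in this counting argument is easy to verify directly; a minimal check, assuming only the vertex and edge counts of K3,3:

    # K3,3 has V = 6 vertices and E = 9 edges.
    V, E = 6, 9
    bound = 2 * V - 4  # edge bound for bridgeless bipartite planar graphs
    print(E, bound, E <= bound)  # 9 8 False, so K3,3 cannot be planar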
Changing the rules
is a toroidal graph, which means that it can be embedded without crossings on a torus, a surface of genus one. These embeddings solve versions of the puzzle in which the houses and companies are drawn on a coffee mug or other such surface instead of a flat plane. There is even enough additional freedom on the torus to solve a version of the puzzle with four houses and four utilities. Similarly, if the three utilities puzzle is presented on a sheet of a transparent material, it may be solved after twisting and gluing the sheet to form a Möbius strip.
Another way of changing the rules of the puzzle that would make it solvable, suggested by Henry Dudeney, is to allow utility lines to pass through other houses or utilities than the ones they connect.
Properties of the utility graph
Beyond the utility puzzle, the same graph comes up in several other mathematical contexts, including rigidity theory, the classification of cages and well-covered graphs, the study of graph crossing numbers, and the theory of graph minors.
Rigidity
The utility graph is a Laman graph, meaning that for almost all placements of its vertices in the plane, there is no way to continuously move its vertices while preserving all edge lengths, other than by a rigid motion of the whole plane, and that none of its spanning subgraphs have the same rigidity property. It is the smallest example of a nonplanar Laman graph. Despite being a minimally rigid graph, it has non-rigid embeddings with special placements for its vertices. For general-position embeddings, a polynomial equation describing all possible placements with the same edge lengths has degree 16, meaning that in general there can be at most 16 placements with the same lengths. It is possible to find systems of edge lengths for which up to eight of the solutions to this equation describe realizable placements.
Other graph-theoretic properties
K3,3 is a triangle-free graph, in which every vertex has exactly three neighbors (a cubic graph). Among all such graphs, it is the smallest. Therefore, it is the (3,4)-cage, the smallest graph that has three neighbors per vertex and in which the shortest cycle has length four.
Like all other complete bipartite graphs, it is a well-covered graph, meaning that every maximal independent set has the same size. In this graph, the only two maximal independent sets are the two sides of the bipartition, and are of equal sizes. K3,3 is one of only seven 3-regular 3-connected well-covered graphs.
Generalizations
Two important characterizations of planar graphs, Kuratowski's theorem that the planar graphs are exactly the graphs that contain neither K3,3 nor the complete graph K5 as a subdivision, and Wagner's theorem that the planar graphs are exactly the graphs that contain neither K3,3 nor K5 as a minor, make use of and generalize the non-planarity of K3,3.
Pál Turán's "brick factory problem" asks more generally for a formula for the minimum number of crossings in a drawing of the complete bipartite graph Km,n in terms of the numbers of vertices m and n on the two sides of the bipartition. The utility graph K3,3 may be drawn with only one crossing, but not with zero crossings, so its crossing number is one.
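For illustration, Zarankiewicz's conjectured formula for the crossing number of Km,n (known to be exact when min(m, n) ≤ 6, which covers the utility graph) can be evaluated in a few lines; treating the formula as given is an assumption of this sketch:

    def zarankiewicz(m: int, n: int) -> int:
        # Conjectured crossing number of the complete bipartite graph K_{m,n};
        # proved exact when min(m, n) <= 6.
        return (m // 2) * ((m - 1) // 2) * (n // 2) * ((n - 1) // 2)

    print(zarankiewicz(3, 3))  # 1: the utility graph needs exactly one crossing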
| Mathematics | Graph theory | null |
58863 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20incompleteness%20theorems | Gödel's incompleteness theorems | Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.
Formal systems: completeness, consistency, and effective axiomatization
The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense.
There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties.
Effective axiomatization
A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is recursively enumerable. This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC).
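Recursive enumerability has a direct computational reading: given a decidable proof-checker, all theorems can be listed by brute-force search over candidate derivations. The sketch below is a toy illustration; is_valid_derivation is a hypothetical placeholder for whatever decidable check the chosen formal system provides:

    from itertools import count, product

    ALPHABET = "01"  # toy encoding of derivations as binary strings

    def is_valid_derivation(d: str) -> bool:
        # Hypothetical decidable check that d encodes a correct derivation.
        raise NotImplementedError("depends on the chosen formal system")

    def enumerate_theorems():
        # Yields every theorem eventually (possibly with repetitions) but
        # never halts: exactly what recursive enumerability allows.
        for length in count(1):
            for symbols in product(ALPHABET, repeat=length):
                d = "".join(symbols)
                if is_valid_derivation(d):
                    yield d  # a valid derivation; its conclusion is a theorem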
The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However, it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems.
Completeness
A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone.
In a system of mathematics, thinkers such as Hilbert believed that it was just a matter of time to find an axiomatization that would allow one to either prove or disprove (by proving its negation) every mathematical formula.
A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) can not be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue.
The theory of first-order Peano arithmetic seems consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus by the first incompleteness theorem, Peano arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete.
Consistency
A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction.
Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent, because if κ is the least such cardinal, then Vκ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model.
If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent.
Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory.
Systems which contain arithmetic
The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems.
The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory.
The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication.
Dan Willard has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories).
Conflicting goals
In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers. In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems.
The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot, from itself, be proved consistent. It is also not complete, as illustrated by the continuum hypothesis, which is unresolvable in ZFC + "there exists an inaccessible cardinal".
The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized.
First incompleteness theorem
Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated.
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2020)
The unprovable statement GF referred to by the theorem is often referred to as "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence.
Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F′ that contains the whole of F plus GF as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F′, and thus F′ also cannot be complete. In this case, GF is indeed a theorem in F′, because it is an axiom. Because GF states only that it is not provable in F, no contradiction is presented by its provability within F′. However, because the incompleteness theorem applies to F′, there will be a new Gödel statement GF′ for F′, showing that F′ is also incomplete. GF′ will differ from GF in that GF′ will refer to F′, rather than F.
Syntactic form of the Gödel sentence
The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be GF itself. In this way, the Gödel sentence GF indirectly states its own unprovability within F.
To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete.
Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation. As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level Π01 of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables.
Truth of the Gödel sentence
The first incompleteness theorem shows that the Gödel sentence GF of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true. For this reason, the sentence GF is often said to be "true but unprovable." However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication Cons(F) → GF, where Cons(F) is a canonical sentence asserting the consistency of F.
Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem. That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number.
Relationship with the liar paradox
Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.
It is not possible to replace "not provable" with "false" in a Gödel sentence, because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski.
Extensions of Gödel's original result
Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions.
Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.
Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent. A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ¬P(m), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.
Second incompleteness theorem
For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F."
Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized. This theorem states that for any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself. This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself.
Expressing consistency
There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency.
Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.)
The Hilbert–Bernays conditions
The standard proof of the second incompleteness theorem assumes that the provability predicate Prov(P) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say:
If F proves P, then F proves Prov(#(P)).
F proves 1.; that is, F proves Prov(#(P)) → Prov(#(Prov(#(P)))).
F proves Prov(#(P → Q)) ∧ Prov(#(P)) → Prov(#(Q)) (analogue of modus ponens).
There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic.
Implications for consistency proofs
Gödel's second incompleteness theorem also implies that a system F1 satisfying the technical conditions outlined above cannot prove the consistency of any system F2 that proves the consistency of F1. This is because such a system F1 can prove that if F2 proves the consistency of F1, then F1 is in fact consistent. For the claim that F1 is consistent has form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in F1". If F1 were in fact inconsistent, then F2 would prove for some n that n is the code of a contradiction in F1. But if F2 also proved that F1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in F1 to show that if F2 is consistent, then F1 is consistent. Since, by the second incompleteness theorem, F1 does not prove its consistency, it cannot prove the consistency of F2 either.
This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out.
The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system T in some system S that is in some sense less doubtful than T itself, for example, weaker than T. For many naturally occurring theories T and S, such as T = Zermelo–Fraenkel set theory and S = primitive recursive arithmetic, the consistency of S is provable in T, and thus S cannot prove the consistency of T by the above corollary of the second incompleteness theorem.
The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of a system using a different system with different axioms. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called ε0 is wellfounded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory.
Examples of undecidable statements
There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem).
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense.
Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics.
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC.
Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.
Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
Undecidable statements provable in larger systems
These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic.
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic.
Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system ATR0 codifying the principles acceptable based on a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.
Relationship with computability
The incompleteness theorem is closely related to several results about undecidable sets in recursion theory.
Kleene presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: no computer program can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by several other authors.
Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic F will prove this. Moreover, suppose the system F is ω-consistent. In that case, it will never prove that a particular polynomial equation has a solution when there is no solution in the integers. Thus, if F were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of F until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that F cannot be both ω-consistent and complete. Moreover, for each consistent effectively generated system F, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in F.
The existence of recursively inseparable sets can also be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable.
Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include false statements in the standard model; these theories are known as ω-inconsistent.
Proof sketch for the first theorem
The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria:
Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system).
In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument).
Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be ω-consistent. Hence the original assumption that the proposed system met the criteria is false.
Arithmetization of syntax
The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.
In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number:
The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111.
The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120.
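Both encodings can be reproduced in a few lines of Python (a sketch of the general idea, using the same three-digit ASCII codes as the examples above):

    def encode(s: str) -> int:
        # Concatenate the 3-digit ASCII code of each character into one number.
        return int("".join(f"{ord(c):03d}" for c in s))

    print(encode("hello"))       # 104101108108111
    print(encode("x=y => y=x"))  # 120061121032061062032121061120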
In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or does not have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements.
Construction of a statement about "provability"
Having shown that in principle the system can indirectly make statements about provability, by analyzing properties of those numbers representing statements it is now possible to show how to create a statement that actually does this.
A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6".
Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).
The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists:
Bew(y) = ∃x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).
The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of the system; the string "Bew" itself is not claimed to be part of this language.
An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
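This one-directional behavior can be mimicked by a semi-decision procedure that searches through all candidate proofs and halts only when one is found. The relation is_proof_of below is a hypothetical stand-in for the arithmetized proof relation, not Gödel's actual formula:

    from itertools import count

    def is_proof_of(x: int, y: int) -> bool:
        # Hypothetical decidable relation: x codes a proof of the
        # formula coded by y.
        raise NotImplementedError

    def bew(y: int) -> bool:
        # Halts (returning True) if some proof of the formula coded by y
        # exists; otherwise searches forever. Provability is only
        # semi-decidable, which mirrors the one-way property of Bew.
        for x in count():
            if is_proof_of(x, y):
                return True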
Diagonalization
The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves
p ↔ F(G(p)).
By letting F be the negation of Bew(x), we obtain the theorem
p ↔ ¬Bew(G(p))
and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula.
The statement p is not literally equal to ¬Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English:
", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.
This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
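The same device of achieving self-reference without literal self-mention is familiar from quines, programs that print their own source code. A minimal Python example (a standard construction, not specific to Gödel's proof):

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Here the string s plays the role of the quoted template, and applying it to itself reproduces the whole program, just as the quoted clause above reproduces the whole sentence.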
Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section.
If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.
If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.
Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system.
In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system:
If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable.
If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent.
If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(y) is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.
Proof via Berry's paradox
George Boolos sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic.
Computer verified proofs
The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers.
Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm, by Russell O'Connor in 2003 using Coq, and by John Harrison in 2009 using HOL Light. A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle.
Proof sketch for the second theorem
The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system using a formal predicate for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system itself.
Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system S can be proved from within the system S itself. This is equivalent to proving the statement "System S is consistent".
Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c can itself be proved in the system S. Note that the consequent of c, "p is not provable", can also be written "not Bew(G(p))".
Observe then that if we can prove that the system S is consistent (i.e. the hypothesis of c), then we have proved that p is not provable. But this is a contradiction, since by the first incompleteness theorem this sentence (i.e. "p is not provable", which is equivalent to p itself) is exactly what we constructed to be unprovable. Notice that this is why we require formalizing the first incompleteness theorem in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the theorem holds in S. So we cannot prove that the system S is consistent. And the statement of the second incompleteness theorem follows.
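The same argument can be set out in a few symbolic lines, writing Cons(S) (an abbreviation introduced here, not in the text above) for the arithmetized statement "the system S is consistent":

```latex
\begin{align*}
&c \;=\; \mathrm{Cons}(S) \rightarrow \neg\,\mathrm{Bew}(G(p)),
   \qquad S \vdash c \quad \text{(the first theorem, formalized inside } S\text{)} \\
&\text{If } S \vdash \mathrm{Cons}(S), \text{ then } S \vdash \neg\,\mathrm{Bew}(G(p)),
   \text{ i.e. } S \vdash p \quad \text{(since } S \vdash p \leftrightarrow \neg\,\mathrm{Bew}(G(p))\text{)} \\
&\text{But if } S \text{ is consistent, the first theorem gives } S \nvdash p;
   \text{ therefore } S \nvdash \mathrm{Cons}(S).
\end{align*}
```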
Discussion and implications
The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles.
Consequences for logicism and Hilbert's second problem
The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic. Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first-order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first-order logic have this problem.
Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem").
Minds and machines
Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.
Hilary Putnam suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine.
Avi Wigderson has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us."
Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure that gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from how the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modeling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following:
Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false.
In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts, and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power.
There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside.
Paraconsistent logic
Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. Stewart Shapiro gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism.
Appeals to the incompleteness theorems in other fields
Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén, Panu Raatikainen, Alan Sokal and Jean Bricmont, and Jeremy Stangroom and Ophelia Benson.
Sokal and Bricmont, and Stangroom and Benson, for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal and Bricmont criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical.
History
After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of natural numbers alone were known as "arithmetic".
Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound.
In the course of his research, Gödel discovered that although a sentence asserting its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's undefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl, and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week.
Announcement
The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively. The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying, "Wir müssen wissen. Wir werden wissen!" ("We must know. We will know!").
This speech quickly became known as a summary of Hilbert's beliefs on mathematics; its final six words were used as Hilbert's epitaph in 1943. Although Gödel was likely in attendance for Hilbert's address, the two never met face to face.
Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for a conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930. Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930.
Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans.
Generalization and acceptance
Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency if the Gödel sentence was changed appropriately. These developments left the incompleteness theorems in essentially their modern form.
Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent.
The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.
Criticisms
Finsler
Paul Finsler used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV, p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability and had only a superficial resemblance to Gödel's work. Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization. Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career.
Zermelo
In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument. In October, Gödel replied with a 10-page letter, where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system; by Tarski's undefinability theorem, this is not true in general. However, Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor". Gödel decided that pursuing the matter further was pointless, and Carnap agreed. Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories.
Wittgenstein
Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1956 Remarks on the Foundations of Mathematics, in particular one section sometimes called the "notorious paragraph", where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas.
Multiple commentators have read Wittgenstein as misunderstanding Gödel, although Juliet Floyd and Hilary Putnam, as well as Graham Priest, have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews of Wittgenstein's remarks, all of which were extremely negative. The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements", and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems, writing:
It is clear from the passages you cite that Wittgenstein did not understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact it is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics).
Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Floyd and Putnam argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. Rodych argues that their interpretation of Wittgenstein is not historically justified. Berto explores the relationship between Wittgenstein's writing and theories of paraconsistent logic.
Maclura pomifera
Maclura pomifera, commonly known as the Osage orange, is a small deciduous tree or large shrub, native to the south-central United States. It typically grows about tall. The distinctive fruit, a multiple fruit that resembles an immature orange, is roughly spherical, bumpy, in diameter, and turns bright yellow-green in the fall. The fruit excretes a sticky white latex when cut or damaged. Despite the name "Osage orange", it is not related to the orange. It is a member of the mulberry family, Moraceae. Due to its latex secretions and woody pulp, the fruit is typically not eaten by humans and rarely by foraging animals. Ecologists Daniel H. Janzen and Paul S. Martin proposed in 1982 that the fruit of this species might be an example of what has come to be called an evolutionary anachronism—that is, a fruit coevolved with a large animal seed dispersal partner that is now extinct. This hypothesis is controversial.
Maclura pomifera has many common names, including mock orange, hedge apple, hedge ball, monkey ball, pap, monkey brains and yellow-wood. The name bois d'arc (French, meaning "bow-wood") has also been corrupted into bodark and bodock.
History
The earliest account of the tree in the English language was given by William Dunbar, a Scottish explorer, in his narrative of a journey made in 1804 from St. Catherine's Landing on the Mississippi River to the Ouachita River. Meriwether Lewis sent some slips and cuttings of the curiosity to President Jefferson in March 1804. According to Lewis's letter, the samples were donated by "Mr. Peter Choteau, who resided the greater portion of his time for many years with the Osage Nation". (Note: This referred to Pierre Chouteau, a fur trader from Saint Louis.) Those cuttings did not survive. In 1810, Bradbury relates that he found two Maclura pomifera trees growing in the garden of Pierre Chouteau, one of the first settlers of Saint Louis, apparently the same person.
American settlers used the Osage orange (i.e. "hedge apple") as a hedge to exclude free-range livestock from vegetable gardens and corn fields. Under severe pruning, the hedge apple sprouted abundant adventitious shoots from its base; as these shoots grew, they became interwoven and formed a dense, thorny barrier hedge. The thorny Osage orange tree was widely naturalized throughout the United States until this usage was superseded by the invention of barbed wire in 1874. By providing a barrier that was "horse-high, bull-strong, and pig-tight", Osage orange hedges provided the "crucial stop-gap measure for westward expansion until the introduction of barbed wire a few decades later".
The trees were named ("bow-wood") by early French settlers who observed the wood being used for war clubs and bow-making by Native Americans. Meriwether Lewis was told that the people of the Osage Nation, "So much ... esteem the wood of this tree for the purpose of making their bows, that they travel many hundreds of miles in quest of it." The trees are also known as "bodark", "bodarc", or "bodock" trees, most likely originating as a corruption of .
The Comanche also used this wood for their bows. They liked the wood because it was strong, flexible and durable, and the bush/tree was common along river bottoms of the Comanchería. Some historians believe that the high value this wood had to Native Americans throughout North America for the making of bows, along with its small natural range, contributed to the great wealth of the Spiroan Mississippian culture that controlled all the land in which these trees grew.
Etymology
The genus Maclura is named in honor of William Maclure (1763–1840), a Scottish-born American geologist. The specific epithet pomifera means "fruit-bearing". The common name Osage derives from Osage Native Americans from whom young plants were first obtained, as told in the notes of Meriwether Lewis in 1804.
Description
General habit
Mature trees range from tall with short trunks and round-topped canopies. The roots are thick, fleshy, and covered with bright orange bark. The tree's mature bark is dark, deeply furrowed and scaly. The plant has significant potential to invade unmanaged habitats.
The wood of M. pomifera is golden to bright yellow but fades to medium brown with ultraviolet light exposure. The wood is heavy, hard, strong, and flexible, capable of receiving a fine polish and very durable in contact with the ground. It has a specific gravity of 0.7736, corresponding to a density of about 774 kg/m³ (48.3 lb/cu ft).
Leaves and branches
Leaves are arranged alternately in a slender growing shoot long. In form they are simple, a long oval terminating in a slender point. The leaves are long and wide, and are thick, firm, dark green, shining above, and paler green below when full grown. In autumn they turn bright yellow. The leaf axils contain formidable spines which when mature are about long.
Branchlets are at first bright green and pubescent; during their first winter they become light brown tinged with orange, and later they become a paler orange brown. Branches contain a yellow pith, and are armed with stout, straight, axillary spines. During the winter, the branches bear lateral buds that are depressed-globular, partly immersed in the bark, and pale chestnut brown in color.
Flowers and fruit
As a dioecious plant, the inconspicuous pistillate (female) and staminate (male) flowers are found on different trees. Staminate flowers are pale green, small, and arranged in racemes borne on long, slender, drooping peduncles developed from the axils of crowded leaves on the spur-like branchlets of the previous year. They feature a hairy, four-lobed calyx; the four stamens are inserted opposite the lobes of calyx, on the margin of a thin disk. Pistillate flowers are borne in a dense spherical many-flowered head which appears on a short stout peduncle from the axils of the current year's growth. Each flower has a hairy four-lobed calyx with thick, concave lobes that invest the ovary and enclose the fruit. Ovaries are superior, ovate, compressed, green, and crowned by a long slender style covered with white stigmatic hairs. The ovule is solitary.
The mature multiple fruit's size and general appearance resembles a large, yellow-green orange (the fruit), about in diameter, with a roughened and tuberculated surface. The compound (or multiple) fruit is a syncarp of numerous small drupes, in which the carpels (ovaries) have grown together; thus, it is classified as a multiple-accessory fruit. Each small drupe is oblong, compressed and rounded; they contain a milky latex which oozes when the fruit is damaged or cut. The seeds are oblong. Although the flowering is dioecious, the pistillate tree when isolated will still bear large oranges, perfect to the sight but lacking the seeds. The fruit has a cucumber-like flavor.
Distribution
Osage orange's pre-Columbian range was largely restricted to a small area in what is now the United States, namely the Red River drainage of Oklahoma, Texas, and Arkansas, as well as the Blackland Prairies and post oak savannas. A disjunct population also occurred in the Chisos Mountains of Texas. It has since become widely naturalized in the United States and Ontario, Canada. Osage orange has been planted in all the 48 contiguous states of the United States and in southeastern Canada.
The largest known Osage orange tree is located at the Patrick Henry National Memorial, in Brookneal, Virginia, and is believed to be almost 350 years old. Another historic tree is located on the grounds of Fort Harrod, a Kentucky pioneer settlement in Harrodsburg, Kentucky.
Ecological aspects of historical distribution
Because of the limited original range and lack of obvious effective means of propagation, the Osage orange has been the subject of controversial claims by some authors to be an evolutionary anachronism, whereby one or more now extinct Pleistocene megafauna, such as ground sloths, mammoths, mastodons or gomphotheres, fed on the fruit and aided in seed dispersal. An equine species that became extinct at the same time also has been suggested as the plant's original dispersal agent because modern horses and other livestock will sometimes eat the fruit. This hypothesis is controversial. For example, a 2015 study indicated that Osage orange seeds are not effectively spread by extant horse or elephant species, while a 2018 study concludes that squirrels are ineffective, short-distance seed dispersers. The claim has been criticised as a "just-so story" that lacks any empirical evidence.
The fruit is not poisonous to humans or livestock, but is not preferred by them, because it is mostly inedible due to a large size (about the diameter of a softball) and hard, dry texture. The edible seeds of the fruit are used by squirrels as food. Large animals such as livestock, which typically would consume fruits and disperse seeds, mainly ignore the fruit.
Ecology
The fruits are consumed by black-tailed deer in Texas, and white-tailed deer and fox squirrels in the Midwest. Crossbills are said to peck the seeds out. Loggerhead shrikes, a declining species in much of North America, use the tree for nesting and cache prey items upon its thorns.
Cultivation
Maclura pomifera prefers a deep and fertile soil, but is hardy over most of the contiguous United States, where it is used as a hedge. It must be regularly pruned to keep it in bounds, and the shoots of a single year will grow long, making it suitable for coppicing. A neglected hedge will become fruit-bearing. It is remarkably free from insect predators and fungal diseases. A thornless male cultivar of the species exists and is vegetatively reproduced for ornamental use. M. pomifera is cultivated in Italy, the former Yugoslavia, Romania, former USSR, and India.
Chemistry
Osajin and pomiferin are isoflavones present in the wood and fruit in an approximately 1:2 ratio by weight, and in turn comprise 4–6% of the weight of dry fruit and wood samples. Primary components of fresh fruit include pectin (46%), resin (17%), fat (5%), and sugar (before hydrolysis, 5%). The moisture content of fresh fruits is about 80%.
Uses
The Osage orange is commonly used as a tree row windbreak in prairie states, which gives it one of its colloquial names, "hedge apple". It was one of the primary trees used in President Franklin Delano Roosevelt's "Great Plains Shelterbelt" WPA project, which was launched in 1934 as an ambitious plan to modify weather and prevent soil erosion in the Great Plains states; by 1942 it had resulted in the planting of 30,233 shelterbelts containing 220 million trees. The sharp-thorned trees were also planted as cattle-deterring hedges before the introduction of barbed wire and afterward became an important source of fence posts. In 2001, its wood was used in Chestertown, Maryland, in the construction of the schooner Sultana, a replica of the 1768 British schooner of the same name.
The heavy, close-grained yellow-orange wood is dense and prized for tool handles, treenails, fence posts, and other applications requiring a strong, dimensionally stable wood that withstands rot. Although its wood is commonly knotty and twisted, straight-grained Osage orange timber makes good bows, as used by Native Americans. John Bradbury, a Scottish botanist who had traveled the interior United States extensively in the early 19th century, reported that a bow made of Osage timber could be traded for a horse and a blanket. Additionally, a yellow-orange dye can be extracted from the wood, which can be used as a substitute for fustic and aniline dyes. At present, florists use the fruits of M. pomifera for decorative purposes.
When dried, the wood has the highest heating value of any commonly available North American wood, and burns long and hot.
Osage orange wood is more rot-resistant than most, making good fence posts. They are generally set up green because the dried wood is too hard to reliably accept the staples used to attach the fencing to the posts. Palmer and Fowler's Fieldbook of Natural History (2nd edition) rates Osage orange wood as at least twice as hard and strong as white oak (Quercus alba). Its dense grain structure makes for good tonal properties, and the wood is commonly used for woodwind instruments and waterfowl game calls.
Compounds extracted from the fruit, when concentrated, may repel insects. However, the naturally occurring concentrations of these compounds in the fruit are too low to make the fruit an effective insect repellent. In 2004, the EPA insisted that a website selling M. pomifera fruits online remove any mention of their supposed repellent properties as false advertising.
Traditional medicine
The Comanche formerly used a decoction of the roots topically as a wash to treat sore eyes.
Sugar substitute
A sugar substitute is a food additive that provides a sweetness like that of sugar while containing significantly less food energy than sugar-based sweeteners, making it a zero-calorie or low-calorie sweetener. Artificial sweeteners may be derived through manufacturing of plant extracts or processed by chemical synthesis. Sugar substitute products are commercially available in various forms, such as small pills, powders and packets.
Common sugar substitutes include aspartame, monk fruit extract, saccharin, sucralose, stevia, acesulfame potassium (ace-K) and cyclamate. These sweeteners are a fundamental ingredient in diet drinks to sweeten them without adding calories. Additionally, sugar alcohols such as erythritol, xylitol and sorbitol are derived from sugars.
No links have been found between approved artificial sweeteners and cancer in humans. Reviews and dietetic professionals have concluded that moderate use of non-nutritive sweeteners as a safe replacement for sugars can help limit energy intake and assist with managing blood glucose and weight.
Description
A sugar substitute is a food additive that provides a sweetness like that of sugar while containing significantly less food energy than sugar-based sweeteners, making it a zero-calorie or low-calorie sweetener. Sugar substitute products are commercially available in various forms, such as small pills, powders and packets.
Types
Artificial sweeteners may be derived through manufacturing of plant extracts or processed by chemical synthesis.
High-intensity sweeteners—one type of sugar substitute—are compounds with many times the sweetness of sucrose (common table sugar). As a result, much less sweetener is required and energy contribution is often negligible. The sensation of sweetness caused by these compounds is sometimes notably different from sucrose, so they are often used in complex mixtures that achieve the most intense sweet sensation.
In North America, common sugar substitutes include aspartame, monk fruit extract, saccharin, sucralose and stevia. Cyclamate is prohibited from being used as a sweetener within the United States, but is allowed in other parts of the world.
Sorbitol, xylitol and lactitol are examples of sugar alcohols (also known as polyols). These are, in general, less sweet than sucrose but have similar bulk properties and can be used in a wide range of food products. Sometimes the sweetness profile is fine-tuned by mixing with high-intensity sweeteners.
Allulose
Allulose is a sweetener in the sugar family, with a chemical structure similar to fructose. It is naturally found in figs, maple syrup and some fruit. While it comes from the same family as other sugars, it does not substantially metabolize as sugar in the body. The FDA recognizes that allulose does not act like sugar, and as of 2019, no longer requires it to be listed with sugars on U.S. nutrition labels. Allulose is about 70% as sweet as sugar, which is why it is sometimes combined with high-intensity sweeteners to make sugar substitutes.
Acesulfame potassium
Acesulfame potassium (Ace-K) is 200 times sweeter than sucrose (common sugar), as sweet as aspartame, about two-thirds as sweet as saccharin, and one-third as sweet as sucralose. Like saccharin, it has a slightly bitter aftertaste, especially at high concentrations. Kraft Foods has patented the use of sodium ferulate to mask acesulfame's aftertaste. Acesulfame potassium is often blended with other sweeteners (usually aspartame or sucralose), which give a more sucrose-like taste, whereby each sweetener masks the other's aftertaste and also exhibits a synergistic effect in which the blend is sweeter than its components.
Unlike aspartame, acesulfame potassium is stable under heat, even under moderately acidic or basic conditions, allowing it to be used as a food additive in baking or in products that require a long shelf life. In carbonated drinks, it is almost always used in conjunction with another sweetener, such as aspartame or sucralose. It is also used as a sweetener in protein shakes and pharmaceutical products, especially chewable and liquid medications, where it can make the active ingredients more palatable.
Aspartame
Aspartame was discovered in 1965 by James M. Schlatter at the G.D. Searle company. He was working on an anti-ulcer drug and accidentally spilled some aspartame on his hand. When he licked his finger, he noticed that it had a sweet taste. Torunn Atteraas Garin oversaw the development of aspartame as an artificial sweetener. It is an odorless, white crystalline powder that is derived from the two amino acids aspartic acid and phenylalanine. It is about 180–200 times sweeter than sugar, and can be used as a tabletop sweetener or in frozen desserts, gelatins, beverages and chewing gum. When cooked or stored at high temperatures, aspartame breaks down into its constituent amino acids. This makes aspartame undesirable as a baking sweetener. It is more stable in somewhat acidic conditions, such as in soft drinks. Though it does not have a bitter aftertaste like saccharin, it may not taste exactly like sugar. When eaten, aspartame is metabolized into its original amino acids. Because it is so intensely sweet, relatively little of it is needed to sweeten a food product, and is thus useful for reducing the number of calories in a product.
The safety of aspartame has been studied extensively since its discovery with research that includes animal studies, clinical and epidemiological research, and postmarketing surveillance, with aspartame being a rigorously tested food ingredient. Although aspartame has been subject to claims against its safety, multiple authoritative reviews have found it to be safe for consumption at typical levels used in food manufacturing. Aspartame has been deemed safe for human consumption by over 100 regulatory agencies in their respective countries, including the UK Food Standards Agency, the European Food Safety Authority (EFSA), and Health Canada.
Cyclamate
In the United States, the Food and Drug Administration banned the sale of cyclamate in 1969 after lab tests in rats involving a 10:1 mixture of cyclamate and saccharin (at levels comparable to humans ingesting 550 cans of diet soda per day) caused bladder cancer. This information, however, is regarded as "weak" evidence of carcinogenic activity, and cyclamate remains in common use in many parts of the world, including Canada, the European Union and Russia.
Mogrosides (monk fruit)
Mogrosides, extracted from monk fruit (commonly also called luo han guo), are recognized as safe for human consumption and are used in commercial products worldwide. As of 2017, it is not a permitted sweetener in the European Union, although it is allowed as a flavor at concentrations where it does not function as a sweetener. In 2017, a Chinese company requested a scientific review of its mogroside product by the European Food Safety Authority. It is the basis of McNeil Nutritionals's tabletop sweetener Nectresse in the United States and Norbu Sweetener in Australia.
Saccharin
Apart from sugar of lead (used as a sweetener in ancient through medieval times before the toxicity of lead was known), saccharin was the first artificial sweetener and was originally synthesized in 1879 by Remsen and Fahlberg. Its sweet taste was discovered by accident. It had been created in an experiment with toluene derivatives. A process for the creation of saccharin from phthalic anhydride was developed in 1950, and, currently, saccharin is created by this process as well as the original process by which it was discovered. It is 300 to 500 times sweeter than sucrose and is often used to improve the taste of toothpastes, dietary foods and dietary beverages. The bitter aftertaste of saccharin is often minimized by blending it with other sweeteners.
Fear about saccharin increased when a 1960 study showed that high levels of saccharin may cause bladder cancer in laboratory rats. In 1977, Canada banned saccharin as a result of the animal research. In the United States, the FDA considered banning saccharin in 1977, but Congress stepped in and placed a moratorium on such a ban. The moratorium required a warning label and also mandated further study of saccharin safety.
Subsequently, it was discovered that saccharin causes cancer in male rats by a mechanism not found in humans. At high doses, saccharin causes a precipitate to form in rat urine. This precipitate damages the cells lining the bladder (urinary bladder urothelial cytotoxicity) and a tumor forms when the cells regenerate (regenerative hyperplasia). According to the International Agency for Research on Cancer, part of the World Health Organization, "This mechanism is not relevant to humans because of critical interspecies differences in urine composition".
In 2001, the United States repealed the warning label requirement, while the threat of an FDA ban had already been lifted in 1991. Most other countries also permit saccharin, but restrict the levels of use, while other countries have outright banned it.
The EPA has removed saccharin and its salts from their list of hazardous constituents and commercial chemical products. In a 14 December 2010 release, the EPA stated that saccharin is no longer considered a potential hazard to human health.
Steviol glycosides (stevia)
Stevia is a natural non-caloric sweetener derived from the Stevia rebaudiana plant, and is manufactured as a sweetener. It is indigenous to South America, and has historically been used in Japanese food products, although it is now common internationally. In 1987, the FDA issued a ban on stevia because it had not been approved as a food additive, although it continued to be available as a dietary supplement. After being provided with sufficient scientific data demonstrating safety of using stevia as a manufactured sweetener, from companies such as Cargill and Coca-Cola, the FDA gave a "no objection" status as generally recognized as safe (GRAS) in December 2008 to Cargill for its stevia product, Truvia, for use of the refined stevia extracts as a blend of rebaudioside A and erythritol. In Australia, the brand Vitarium uses Natvia, a stevia sweetener, in a range of sugar-free children's milk mixes.
In August 2019, the FDA placed an import alert on stevia leaves and crude extracts—which do not have GRAS status—and on foods or dietary supplements containing them, citing concerns about safety and potential for toxicity.
Sucralose
The world's most commonly used artificial sweetener, sucralose is a chlorinated sugar that is about 600 times sweeter than sugar. It is produced from sucrose when three chlorine atoms replace three hydroxyl groups. It is used in beverages, frozen desserts, chewing gum, baked goods and other foods. Unlike other artificial sweeteners, it is stable when heated and can therefore be used in baked and fried goods. Discovered in 1976, the FDA approved sucralose for use in 1998.
Most of the controversy surrounding Splenda, a sucralose sweetener, is focused not on safety but on its marketing. It has been marketed with the slogan, "Splenda is made from sugar, so it tastes like sugar." Sucralose is prepared from either of two sugars, sucrose or raffinose. With either base sugar, processing replaces three oxygen-hydrogen groups in the sugar molecule with three chlorine atoms.
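At the level of molecular formulas, the replacement described above amounts to the following net change (shown for the sucrose route; this is bookkeeping only, as the industrial synthesis actually proceeds through several protection and chlorination steps rather than a single substitution):

```latex
\underset{\text{sucrose}}{\mathrm{C_{12}H_{22}O_{11}}}
\;\xrightarrow{\;-3\,\mathrm{OH},\;\;+3\,\mathrm{Cl}\;}\;
\underset{\text{sucralose}}{\mathrm{C_{12}H_{19}Cl_{3}O_{8}}}
```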
The "Truth About Splenda" website was created in 2005 by the Sugar Association, an organization representing sugar beet and sugar cane farmers in the United States, to provide its view of sucralose. In December 2004, five separate false-advertising claims were filed by the Sugar Association against Splenda manufacturers Merisant and McNeil Nutritionals for claims made about Splenda related to the slogan, "Made from sugar, so it tastes like sugar." French courts ordered the slogan to no longer be used in France, while in the U.S., the case came to an undisclosed settlement during the trial.
There are few safety concerns pertaining to sucralose and the way sucralose is metabolized suggests a reduced risk of toxicity. For example, sucralose is extremely insoluble in fat and, thus, does not accumulate in fatty tissues; sucralose also does not break down and will dechlorinate only under conditions that are not found during regular digestion (i.e., high heat applied to the powder form of the molecule). Only about 15% of sucralose is absorbed by the body and most of it passes out of the body unchanged.
In 2017, sucralose was the most common sugar substitute used in the manufacture of foods and beverages; it had 30% of the global market, which was projected to be valued at $2.8 billion by 2021.
Sugar alcohol
Sugar alcohols, or polyols, are sweetening and bulking ingredients used in the manufacturing of foods and beverages, particularly sugar-free candies, cookies and chewing gums. As a sugar substitute, they typically are less sweet and supply fewer calories (about one-half to one-third fewer calories) than sugar. They are converted to glucose slowly and do not cause sharp increases in blood glucose.
Sorbitol, xylitol, mannitol, erythritol and lactitol are examples of sugar alcohols. These are, in general, less sweet than sucrose, but have similar bulk properties and can be used in a wide range of food products. The sweetness profile may be altered during manufacturing by mixing with high-intensity sweeteners.
Sugar alcohols are carbohydrates with a biochemical structure partially matching the structures of sugar and alcohol, although not containing ethanol. They are not entirely metabolized by the human body. The unabsorbed sugar alcohols may cause bloating and diarrhea due to their osmotic effect, if consumed in sufficient amounts. They are found commonly in small quantities in some fruits and vegetables, and are commercially manufactured from different carbohydrates and starch.
Production
The majority of sugar substitutes approved for food use are artificially synthesized compounds. However, some bulk plant-derived sugar substitutes are known, including sorbitol, xylitol and lactitol. As it is not commercially profitable to extract these products from fruits and vegetables, they are produced by catalytic hydrogenation of the appropriate reducing sugar. For example, xylose is converted to xylitol, lactose to lactitol, and glucose to sorbitol.
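As an illustration of the hydrogenation just described, reducing the aldehyde group of xylose gives xylitol. The balanced overall equation is below; the nickel catalyst shown is a typical industrial choice rather than a detail given in the text above:

```latex
\underset{\text{xylose}}{\mathrm{C_{5}H_{10}O_{5}}} \;+\; \mathrm{H_{2}}
\;\xrightarrow{\;\text{Ni catalyst, heat, pressure}\;}\;
\underset{\text{xylitol}}{\mathrm{C_{5}H_{12}O_{5}}}
```

The conversions of lactose to lactitol and of glucose to sorbitol are the analogous reductions of their respective carbonyl groups.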
Use
Reasons for use
Sugar substitutes are used instead of sugar for a number of reasons, including:
Dental care
Carbohydrates and sugars usually adhere to the tooth enamel, where bacteria feed upon them and quickly multiply. The bacteria convert the sugar to acids that decay the teeth. Sugar substitutes, unlike sugar, do not erode teeth as they are not fermented by the microflora of the dental plaque. A sweetener that may benefit dental health is xylitol, which tends to prevent bacteria from adhering to the tooth surface, thus preventing plaque formation and eventually tooth decay. A Cochrane review, however, found only low-quality evidence that xylitol in a variety of dental products actually has any benefit in preventing tooth decay in adults and children.
Dietary concerns
Sugar substitutes are a fundamental ingredient in diet drinks to sweeten them without adding calories. Additionally, sugar alcohols such as erythritol, xylitol and sorbitol are derived from sugars. In the United States, six high-intensity sugar substitutes have been approved for use: aspartame, sucralose, neotame, acesulfame potassium (Ace-K), saccharin and advantame. Food additives must be approved by the FDA, and sweeteners must be proven as safe via submission by a manufacturer of a GRAS document. The conclusions about GRAS are based on a detailed review of a large body of information, including rigorous toxicological and clinical studies. GRAS notices exist for two plant-based, high-intensity sweeteners: steviol glycosides obtained from stevia leaves (Stevia rebaudiana) and extracts from Siraitia grosvenorii, also called luo han guo or monk fruit.
Glucose metabolism
Diabetes mellitus – People with diabetes limit refined sugar intake to regulate their blood sugar levels. Many artificial sweeteners allow sweet-tasting food without increasing blood glucose. Others do release energy but are metabolized more slowly, preventing spikes in blood glucose. A concern, however, is that overconsumption of foods and beverages made more appealing with sugar substitutes may increase the risk of developing diabetes. A 2014 systematic review showed that consumption of 330 ml per day of artificially sweetened beverages (slightly less than a standard U.S. can) led to an increased risk of type 2 diabetes. A 2015 meta-analysis of numerous clinical studies showed that habitual consumption of sugar-sweetened beverages, artificially sweetened beverages, and fruit juice increased the risk of developing diabetes, although with inconsistent results and generally low quality of evidence. A 2016 review described the relationship between non-nutritive sweeteners and diabetes as inconclusive. A 2020 Cochrane systematic review compared several non-nutritive sweeteners to sugar, placebo and a nutritive low-calorie sweetener (tagatose), but the results were unclear for effects on HbA1c, body weight and adverse events. The studies included were mainly of very low certainty and did not report on health-related quality of life, diabetes complications, all-cause mortality or socioeconomic effects.
Reactive hypoglycemia – Individuals with reactive hypoglycemia will produce an excess of insulin after quickly absorbing glucose into the bloodstream. This causes their blood glucose levels to fall below the amount needed for proper body and brain function. As a result, like diabetics, they must avoid intake of high-glycemic foods like white bread, and often use artificial sweeteners for sweetness without blood glucose.
Cost and shelf life
Many sugar substitutes are cheaper than sugar in the final food formulation. Sugar substitutes are often lower in total cost because of their long shelf life and high sweetening intensity. This allows sugar substitutes to be used in products that will not perish after a short period of time.
Acceptable daily intake levels
In the United States, the FDA provides guidance for manufacturers and consumers about the daily limits for consuming high-intensity sweeteners, a measure called acceptable daily intake (ADI). During their premarket review for all of the high-intensity sweeteners approved as food additives, the FDA established an ADI defined as an amount in milligrams per kilogram of body weight per day (mg/kg bw/d), indicating that a high-intensity sweetener does not cause safety concerns if estimated daily intakes are lower than the ADI. The FDA states: "An ADI is the amount of a substance that is considered safe to consume each day over the course of a person's lifetime." For stevia (specifically, steviol glycosides), an ADI was not derived by the FDA, but by the Joint Food and Agricultural Organization/World Health Organization Expert Committee on Food Additives, whereas an ADI has not been determined for monk fruit.
For the sweeteners approved as food additives, the ADIs in milligrams per kilogram of body weight per day are listed below; a short worked example of turning an ADI into a daily serving limit follows the list.
Acesulfame potassium, ADI 15
Advantame, ADI 32.8
Aspartame, ADI 50
Neotame, ADI 0.3
Saccharin, ADI 15
Sucralose, ADI 5
Stevia (pure extracted steviol glycosides), ADI 4
Monk fruit extract, no ADI determined
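To make these figures concrete, the sketch below converts an ADI and a body weight into a daily serving ceiling. The ADI comes from the list above, but the body weight and the per-can aspartame content are illustrative assumptions (actual contents vary by product), so the output is indicative only.

```python
def max_daily_servings(adi_mg_per_kg: float,
                       body_weight_kg: float,
                       mg_per_serving: float) -> float:
    """Return the number of servings per day that stays at or below the ADI."""
    daily_limit_mg = adi_mg_per_kg * body_weight_kg  # ADI scales with body weight
    return daily_limit_mg / mg_per_serving

# Aspartame: ADI 50 mg/kg bw/d (from the list above).
# Assumed figures: a 70 kg adult and ~200 mg aspartame per can of diet soda.
cans = max_daily_servings(adi_mg_per_kg=50, body_weight_kg=70, mg_per_serving=200)
print(f"Stays under the ADI up to about {cans:.1f} cans per day")
```

The calculation shows why ADIs are stated per kilogram of body weight: the resulting limit scales linearly with the consumer's size, so a child reaches the same ADI on far fewer servings than an adult.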
Mouthfeel
If the sucrose, or other sugar, that is replaced has contributed to the texture of the product, then a bulking agent is often also needed. This may be seen in soft drinks or sweet teas that are labeled as "diet" or "light" that contain artificial sweeteners and often have notably different mouthfeel, or in table sugar replacements that mix maltodextrins with an intense sweetener to achieve satisfactory texture sensation.
Sweetness intensity
The FDA has published estimates of sweetness intensity, called a multiplier of sweetness intensity (MSI) as compared to table sugar.
Plant-derived
The sweetness levels and energy densities are in comparison to those of sucrose.
Artificial
Sugar alcohols
Research
Body weight
Reviews and dietetic professionals have concluded that moderate use of non-nutritive sweeteners as a safe replacement for sugars may help limit energy intake and assist with managing blood glucose and weight. Other reviews found that the association between body weight and non-nutritive sweetener usage is inconclusive: observational studies tend to show an association with increased body weight, while randomized controlled trials instead show a small causal weight loss. Still other reviews concluded that use of non-nutritive sweeteners instead of sugar reduces body weight.
Obesity
There is little evidence that artificial sweeteners directly affect the onset and mechanisms of obesity, although consuming sweetened products is associated with weight gain in children. Some preliminary studies indicate that consumption of products manufactured with artificial sweeteners is associated with obesity and metabolic syndrome, decreased satiety, disturbed glucose metabolism, and weight gain, mainly due to increased overall calorie intake, although the numerous factors influencing obesity remain poorly studied, as of 2021.
Cancer
Multiple reviews have found no link between artificial sweeteners and the risk of cancer. FDA scientists have reviewed scientific data regarding the safety of aspartame and different sweeteners in food, concluding that they are safe for the general population under common intake conditions.
Mortality
High consumption of artificially sweetened beverages was associated with a 12% higher risk of all-cause mortality and a 23% higher risk of cardiovascular disease (CVD) mortality in a 2021 meta-analysis. A 2020 meta-analysis found a similar result, with the highest consuming group having a 13% higher risk of all-cause mortality and a 25% higher risk of CVD mortality. However, both studies also found similar or greater increases in all-cause mortality when consuming the same amount of sugar-sweetened beverages.
Non-nutritive sweeteners vs sugar
The World Health Organization does not recommend using non-nutritive sweeteners to control body weight, based on a 2022 review that could only find small reductions in body fat and no effect on cardiometabolic risk. It recommends fruit or non-sweetened foods instead.
Cedrus
Cedrus, with the common English name cedar, is a genus of coniferous trees in the plant family Pinaceae (subfamily Abietoideae). They are native to the mountains of the western Himalayas and the Mediterranean region, occurring at altitudes of in the Himalayas and in the Mediterranean.
Description
Cedrus trees can grow up to 30–40 m (occasionally 60 m) tall with spicy-resinous scented wood, thick ridged or square-cracked bark, and broad, level branches. The shoots are dimorphic and are made up of long shoots, which form the framework of the branches, and short shoots, which carry most of the leaves. The leaves are evergreen and needle-like, 8–60 mm long, arranged in an open spiral phyllotaxis on long shoots, and in dense spiral clusters of 15–45 together on short shoots; they vary from bright grass-green to dark green to strongly glaucous pale blue-green, depending on the thickness of the white wax layer which protects the leaves from desiccation. The seed cones are barrel-shaped, 6–12 cm long and 3–8 cm broad, green maturing grey-brown, and, as in Abies, disintegrate at maturity to release the winged seeds. The seeds are 10–15 mm long, with a 20–30 mm wing; as in Abies, the seeds have two or three resin blisters, containing an unpleasant-tasting resin, thought to be a defence against squirrel predation. Cone maturation takes one year, with pollination in autumn and the seeds maturing at the same time a year later. The pollen cones are slender ovoid, 3–8 cm long, produced in late summer, and shed pollen in autumn.
Classification
Cedars share a very similar cone structure with the firs (Abies) and were traditionally thought to be most closely related to them, but molecular evidence supports a basal position in the family.
Taxonomy
The five taxa of Cedrus are assigned, according to taxonomic opinion, to between one and four species. The oldest known fossil of Cedrus is Cedrus penzhinaensis, known from fossil wood found in Early Cretaceous (Albian) sediments of Kamchatka, Russia.
Ecology
Cedars are adapted to mountainous climates; in the Mediterranean, they receive winter precipitation, mainly as snow, and summer drought, while in the western Himalaya, they receive primarily summer monsoon rainfall and occasional winter snowfall. While no members of Cedrus are native to the Americas, members of Juniperus and Cupressaceae are native and are called by the common name of "cedar".
Cedars are used as food plants by the larvae of some Lepidoptera species including pine processionary and turnip moth (recorded on deodar cedar).
Use
Cedars are very popular ornamental trees, and are often cultivated in temperate climates where winter temperatures do not fall below circa −25 °C. The Turkish cedar is slightly hardier, to −30 °C or just below. Extensive mortality of planted specimens can occur in severe winters when temperatures fall lower. Locales with successful long-term cultivation include the Mediterranean region, Western Europe north to the British Isles, southern Australia and New Zealand, and southern and western North America.
Cedar wood and cedarwood oil are natural repellents to moths, hence cedar is a popular lining for cedar chests and closets in which woolens are stored. This specific use of cedar is mentioned in The Iliad, Book 24, referring to the cedar-roofed or lined storage chamber where Priam went to fetch treasures to be used as ransom. The ancients made cedarwood oil from Lebanon cedar, a true cedar of the genus Cedrus. However, the species used for modern cedar chests and closets in North America is Juniperus virginiana, and cedarwood oil is now typically derived from various junipers and cypresses (of the family Cupressaceae). Cedar is also commonly used to make shoe trees because it can absorb moisture and deodorize.
Many species of cedar are suitable for training as bonsai. They work well for many styles, including formal and informal upright, slanting, and cascading.
Nomenclature
Some authorities consider Cedrus the only "true cedars" and discourage use of the name for other genera without an additional qualifier, such as "white-cedar". Nevertheless, the name "cedar" has been applied (since about 1700) to other trees, such as the North American Thuja plicata, commonly called "western red cedar", and Juniperus virginiana, commonly called "red cedar" or "eastern red cedar". In some cases, the botanical name alludes to this usage, such as the genus Calocedrus, meaning "beautiful cedar" (also known as "incense cedar"). Several species of the genera Calocedrus, Thuja, and Chamaecyparis in the Pacific Northwest having similarly aromatic wood are referred to as "false cedars". In Australia, Toona ciliata has long been known as red cedar, in furniture and as the tree, even though it belongs to the Meliaceae or mahogany family.
Etymology
Both the Latin word cedrus and the generic name cedrus are derived from Greek κέδρος kédros. Ancient Greek and Latin used the same word, kédros and cedrus, respectively, for different species of plants now classified in the genera Cedrus and Juniperus (juniper). Species of both genera are native to the area where Greek language and culture originated, though as the word kédros does not seem to be derived from any of the languages of the Middle East, it has been suggested the word may originally have applied to Greek species of juniper and was later adopted for species now classified in the genus Cedrus because of the similarity of their aromatic woods. The name was similarly applied to citron and the word citrus is derived from the same root. However, as a loan word in English, cedar had become fixed to its biblical sense of Cedrus by the time of its first recorded usage in AD 1000.
| Biology and health sciences | Gymnosperms | null |
58890 | https://en.wikipedia.org/wiki/Resin | Resin | A resin is a solid or highly viscous liquid that can be converted into a polymer. Resins may be biological or synthetic in origin, but are typically harvested from plants. Resins are mixtures of organic compounds, predominantly terpenes. Well-known resins include amber, hashish, frankincense, myrrh and the animal-derived resin shellac. Resins are commonly used in varnishes, adhesives, food additives, incenses and perfumes.
Resins protect plants from insects and pathogens, and are secreted in response to injury. Resins confound a wide range of herbivores, insects, and pathogens, while the volatile phenolic compounds may attract benefactors such as predators of insects that attack the plant.
Composition
Most plant resins are composed of terpenes. Specific components are the bicyclic terpenes alpha-pinene, beta-pinene, delta-3-carene, and sabinene, the monocyclic terpenes limonene and terpinolene, and smaller amounts of the tricyclic sesquiterpenes longifolene, caryophyllene, and delta-cadinene. Some resins also contain a high proportion of resin acids. Rosins, on the other hand, are less volatile and consist of diterpenes among other compounds.
Examples
Examples of plant resins include amber, Balm of Gilead, balsam, Canada balsam, copal from trees of Protium copal and Hymenaea courbaril, dammar gum from trees of the family Dipterocarpaceae, dragon's blood from the dragon trees (Dracaena species), elemi, frankincense from Boswellia sacra, galbanum from Ferula gummosa, gum guaiacum from the lignum vitae trees of the genus Guaiacum, kauri gum from trees of Agathis australis, hashish (Cannabis resin) from Cannabis indica, labdanum from Mediterranean species of Cistus, mastic (plant resin) from the mastic tree Pistacia lentiscus, myrrh from shrubs of Commiphora, sandarac resin from Tetraclinis articulata, the national tree of Malta, styrax (a Benzoin resin from various Styrax species) and spinifex resin from Australian grasses.
Amber is fossil resin (also called resinite) from coniferous and other tree species. Copal, kauri gum, dammar and other resins may also be found as subfossil deposits. Subfossil copal can be distinguished from genuine fossil amber because it becomes tacky when a drop of a solvent such as acetone or chloroform is placed on it.
African copal and the kauri gum of New Zealand are also procured in a semi-fossil condition.
Rosin
Rosin is a solidified resin from which the volatile terpenes have been removed by distillation. Typical rosin is a transparent or translucent mass, with a vitreous fracture and a faintly yellow or brown colour, non-odorous or having only a slight turpentine odour and taste. Rosin is insoluble in water, mostly soluble in alcohol, essential oils, ether, and hot fatty oils. Rosin softens and melts when heated and burns with a bright but smoky flame.
Rosin consists of a complex mixture of different substances including organic acids named the resin acids. These are related to the terpenes; resin acids are oxidized terpenes. Resin acids dissolve in alkalis to form resin soaps, from which the resin acids are regenerated upon treatment with acids. Examples of resin acids are abietic acid (sylvic acid), C20H30O2, plicatic acid contained in cedar, and pimaric acid, C20H30O2, a constituent of galipot resin. Abietic acid can also be extracted from rosin by means of hot alcohol.
Rosin is obtained from pines and some other plants, mostly conifers. Plant resins are generally produced as stem secretions, but in some Central and South American species of Dalechampia and Clusia they are produced as pollination rewards, and used by some stingless bee species in nest construction. Propolis, consisting largely of resins collected from plants such as poplars and conifers, is used by honey bees to seal small gaps in their hives, while larger gaps are filled with beeswax.
Petroleum- and insect-derived resins
Shellac is an example of an insect-derived resin.
Asphaltite and Utah resin are petroleum bitumens.
History and etymology
Human use of plant resins has a very long history, documented in ancient Greece by Theophrastus and in ancient Rome by Pliny the Elder, especially in the case of the resins known as frankincense and myrrh, prized in ancient Egypt. These were highly valued substances, required as incense in some religious rites.
The word resin comes from French resine, from Latin resina "resin", which either derives from or is a cognate of the Greek rhētínē "resin of the pine", of unknown earlier origin, though probably non-Indo-European.
The word "resin" has been applied in the modern world to nearly any component of a liquid that will set into a hard lacquer or enamel-like finish. An example is nail polish. Certain "casting resins" and synthetic resins (such as epoxy resin) have also been given the name "resin".
Some naturally-derived resins, when soft, are known as 'oleoresins', and when containing benzoic acid or cinnamic acid they are called balsams. Oleoresins are naturally-occurring mixtures of an oil and a resin; they can be extracted from various plants. Other resinous products in their natural condition are a mix with gum or mucilaginous substances and known as gum resins. Several natural resins are used as ingredients in perfumes, e.g., balsams of Peru and tolu, elemi, styrax, and certain turpentines.
Non-resinous exudates
Other liquid compounds found inside plants or exuded by plants, such as sap, latex, or mucilage, are sometimes confused with resin but are not the same. Saps, in particular, serve a nutritive function that resins do not.
Uses
Plant resins
Plant resins are valued for the production of varnishes, adhesives, and food glazing agents. They are also prized as raw materials for the synthesis of other organic compounds and provide constituents of incense and perfume. The oldest known use of plant resin comes from the late Middle Stone Age in Southern Africa where it was used as an adhesive for hafting stone tools.
The hard transparent resins, such as the copals, dammars, mastic, and sandarac, are principally used for varnishes and adhesives, while the softer odoriferous oleo-resins (frankincense, elemi, turpentine, copaiba), and gum resins containing essential oils (ammoniacum, asafoetida, gamboge, myrrh, and scammony) are more used for therapeutic purposes, food and incense. The resin of the Aleppo Pine is used to flavour retsina, a Greek resinated wine.
Animal resins
While animal resins are not as common as either plant or synthetic resins, some animal resins, such as lac (obtained from Kerria lacca), are used for applications like sealing wax in India and lacquerware in Sri Lanka.
Synthetic resins
Many materials are produced via the conversion of synthetic resins to solids. Important examples are bisphenol A diglycidyl ether, which is a resin converted to epoxy glue upon the addition of a hardener. Silicones are often prepared from silicone resins via room temperature vulcanization. Alkyd resins are used in paints and varnishes and harden or cure by exposure to oxygen in the air.
| Physical sciences | Terpenes and terpenoids | Chemistry |
58900 | https://en.wikipedia.org/wiki/Unmanned%20aerial%20vehicle | Unmanned aerial vehicle | An unmanned aerial vehicle (UAV), or unmanned aircraft system (UAS), commonly known as a drone, is an aircraft with no human pilot, crew, or passengers onboard. UAVs were originally developed through the twentieth century for military missions too "dull, dirty or dangerous" for humans, and by the twenty-first, they had become essential assets to most militaries. As control technologies improved and costs fell, their use expanded to many non-military applications. These include aerial photography, area coverage, precision agriculture, forest fire monitoring, river monitoring, environmental monitoring, weather observation, policing and surveillance, infrastructure inspections, smuggling, product deliveries, entertainment, and drone racing.
Terminology
Many terms are used for aircraft which fly without any persons onboard.
The term drone has been used from the early days of aviation, being applied to remotely flown target aircraft used for practice firing of a battleship's guns, such as the 1920s Fairey Queen and 1930s de Havilland Queen Bee. Later examples included the Airspeed Queen Wasp and Miles Queen Martinet, before ultimate replacement by the GAF Jindivik. The term remains in common use. In addition to their control software, autonomous drones also employ a host of advanced technologies that allow them to carry out their missions without human intervention, such as cloud computing, computer vision, artificial intelligence, machine learning, deep learning, and thermal sensors. For recreational uses, an aerial photography drone is an aircraft that has first-person video, autonomous capabilities, or both.
An unmanned aerial vehicle (UAV) is defined as a "powered, aerial vehicle that does not carry a human operator, uses aerodynamic forces to provide vehicle lift, can fly autonomously or be piloted remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload". UAV is a term that is commonly applied to military use cases. Missiles with warheads are generally not considered UAVs because the vehicle itself is a munition, but certain types of propeller-based missile are often called "kamikaze drones" by the public and media. The relation of UAVs to remote-controlled model aircraft is also unclear; UAVs may or may not include remote-controlled model aircraft. Some jurisdictions base their definition on size or weight; however, the US FAA defines any unmanned flying craft as a UAV regardless of size. A similar term is remotely piloted aerial vehicle (RPAV).
UAVs or RPAVs can also be seen as a component of an unmanned aircraft system (UAS), which also includes a ground-based controller and a system of communications with the aircraft.
The term UAS was adopted by the United States Department of Defense (DoD) and the United States Federal Aviation Administration (FAA) in 2005 according to their Unmanned Aircraft System Roadmap 2005–2030. The International Civil Aviation Organization (ICAO) and the British Civil Aviation Authority adopted this term, also used in the European Union's Single European Sky (SES) Air Traffic Management (ATM) Research (SESAR Joint Undertaking) roadmap for 2020. This term emphasizes the importance of elements other than the aircraft. It includes elements such as ground control stations, data links and other support equipment. Similar terms are unmanned aircraft vehicle system (UAVS) and remotely piloted aircraft system (RPAS). Many similar terms are in use. Under new regulations which came into effect 1 June 2019, the term RPAS has been adopted by the Canadian Government to mean "a set of configurable elements consisting of a remotely piloted aircraft, its control station, the command and control links and any other system elements required during flight operation".
Classification types
UAVs may be classified like any other aircraft, according to design configuration such as weight or engine type, maximum flight altitude, degree of operational autonomy, operational role, etc. The United States Department of Defense classifies UAVs into five groups according to weight, operating altitude and speed.
Other classifications of UAVs include:
Range and endurance
UAVs are usually divided into five categories when classified by range and endurance.
Size
UAVs are usually divided into four categories when classified by size, with at least one of the dimensions (length or wingspan) determining the size class.
Weight
Based on their weight, drones can be classified into five categories.
Degree of autonomy
Drones can also be classified based on the degree of autonomy in their flight operations. ICAO classifies unmanned aircraft as either remotely piloted aircraft or fully autonomous. Some UAVs offer intermediate degrees of autonomy. For example, a vehicle may be remotely piloted in most contexts but have an autonomous return-to-base operation. Some aircraft types may optionally fly manned or as UAVs, which may include manned aircraft transformed into unmanned or optionally piloted UAVs (OPVs). The flight of UAVs may operate under remote control by a human operator, as remotely piloted aircraft (RPA), or with various degrees of autonomy, such as autopilot assistance, up to fully autonomous aircraft that have no provision for human intervention.
Altitude
Based on the altitude, the following UAV classifications have been used at industry events such as ParcAberporth Unmanned Systems forum:
Hand-held altitude, about 2 km range
Close altitude, up to 10 km range
NATO type altitude, up to 50 km range
Tactical altitude, about 160 km range
MALE (medium altitude, long endurance), range over 200 km
HALE (high altitude, long endurance), indefinite range
Hypersonic high-speed, supersonic (Mach 1–5) or hypersonic (Mach 5+), suborbital altitude, range over 200 km
Orbital low Earth orbit (Mach 25+)
Cislunar, Earth–Moon transfer
Computer Assisted Carrier Guidance System (CACGS) for UAVs
Composite criteria
An example of classification based on composite criteria is the U.S. military's unmanned aerial systems (UAS) classification, which groups UAVs based on the weight, maximum altitude and speed of the UAV component.
Power sources
UAVs can be classified based on their power or energy source, which significantly impacts their flight duration, range, and environmental impact. The main categories include:
Battery-powered (electric): These UAVs use rechargeable batteries, offering quiet operation and lower maintenance but potentially limited flight times. The reduced noise levels make them suitable for urban environments and sensitive operations.
Fuel-powered (internal combustion): Utilizing traditional fuels like gasoline or diesel, these UAVs often have longer flight times but may be noisier and require more maintenance. They are typically used for applications requiring extended endurance or heavy payload capacity.
Hybrid: Combining electric and fuel power sources, hybrid UAVs aim to balance the benefits of both systems for improved performance and efficiency. This configuration could allow for versatility in mission profiles and adaptability to different operational requirements.
Solar-powered: Equipped with solar panels, these UAVs can potentially achieve extended flight times by harnessing solar energy, especially at high altitudes. Solar-powered UAVs may be particularly suited for long-endurance missions and environmental monitoring applications.
Nuclear-powered: While nuclear power has been explored for larger aircraft, its application in UAVs remains largely theoretical due to safety concerns and regulatory challenges. Research in this area is ongoing but faces significant hurdles before practical implementation.
Hydrogen fuel cell: An emerging technology, hydrogen fuel cells offer the potential for longer flight times with zero emissions, though the technology is still developing for widespread UAV use. The high energy density of hydrogen makes it a promising option for future UAV propulsion systems.
History
Early drones
The earliest recorded use of an unmanned aerial vehicle for warfighting occurred in July 1849, with a balloon carrier (the precursor to the aircraft carrier) in the first offensive use of air power in naval aviation. Austrian forces besieging Venice attempted to launch some 200 incendiary balloons at the besieged city. The balloons were launched mainly from land; however, some were also launched from the Austrian ship Vulcano. At least one bomb fell in the city; however, due to the wind changing after launch, most of the balloons missed their target, and some drifted back over Austrian lines and the launching ship Vulcano.
The Spanish engineer Leonardo Torres Quevedo introduced a radio-based control-system called the Telekino at the Paris Academy of Science in 1903, as a way of testing airships without risking human life.
Significant development of drones started in the 1900s, and originally focused on providing practice targets for training military personnel. The earliest attempt at a powered UAV was A. M. Low's "Aerial Target" in 1916. Low confirmed that Geoffrey de Havilland's monoplane was the one that flew under control on 21 March 1917 using his radio system. Following this successful demonstration in the spring of 1917, Low was transferred in 1918 to develop aircraft-controlled fast motor launches (D.C.B.s) with the Royal Navy, intended to attack shipping and port installations, and he also assisted Wing Commander Brock in preparations for the Zeebrugge Raid. Other British unmanned developments followed, leading to the fleet of over 400 de Havilland 82 Queen Bee aerial targets that went into service in 1935.
Nikola Tesla described a fleet of uncrewed aerial combat vehicles in 1915. These developments also inspired the construction of the Kettering Bug by Charles Kettering from Dayton, Ohio and the Hewitt-Sperry Automatic Airplane – initially meant as an uncrewed plane that would carry an explosive payload to a predetermined target. Development continued during World War I, when the Dayton-Wright Airplane Company invented a pilotless aerial torpedo that would explode at a preset time.
The film star and model-airplane enthusiast Reginald Denny developed the first scaled remote piloted vehicle in 1935.
Soviet researchers experimented with controlling Tupolev TB-1 bombers remotely in the late 1930s.
World War II
In 1940, Denny started the Radioplane Company and more models emerged during World War II used both to train antiaircraft gunners and to fly attack-missions. Nazi Germany produced and used various UAV aircraft during the war, like the Argus As 292 and the V-1 flying bomb with a jet engine. Fascist Italy developed a specialised drone version of the Savoia-Marchetti SM.79 flown by remote control, although the Armistice with Italy was enacted prior to any operational deployment.
Postwar period
After World War II development continued in vehicles such as the American JB-4 (using television/radio-command guidance), the Australian GAF Jindivik and Teledyne Ryan Firebee I of 1951, while companies like Beechcraft offered their Model 1001 for the U.S. Navy in 1955. Nevertheless, they were little more than remote-controlled airplanes until the Vietnam War. In 1959, the U.S. Air Force, concerned about losing pilots over hostile territory, began planning for the use of uncrewed aircraft. Planning intensified after the Soviet Union shot down a U-2 in 1960. Within days, a highly classified UAV program started under the code name of "Red Wagon". The August 1964 clash in the Tonkin Gulf between naval units of the U.S. and the North Vietnamese Navy initiated America's highly classified UAVs (Ryan Model 147, Ryan AQM-91 Firefly, Lockheed D-21) into their first combat missions of the Vietnam War. When the Chinese government showed photographs of downed U.S. UAVs via Wide World Photos, the official U.S. response was "no comment".
During the War of Attrition (1967–1970) in the Middle East, Israeli intelligence tested the first tactical UAVs installed with reconnaissance cameras, which successfully returned photos from across the Suez Canal. This was the first time that tactical UAVs that could be launched and landed on any short runway (unlike the heavier jet-based UAVs) were developed and tested in battle.
In the 1973 Yom Kippur War, Israel used UAVs as decoys to spur opposing forces into wasting expensive anti-aircraft missiles. After the 1973 Yom Kippur War, a few key people from the team that developed this early UAV joined a small startup company that aimed to develop UAVs into a commercial product, eventually purchased by Tadiran and leading to the development of the first Israeli UAV.
In 1973, the U.S. military officially confirmed that they had been using UAVs in Southeast Asia (Vietnam). Over 5,000 U.S. airmen had been killed and over 1,000 more were missing or captured. The USAF 100th Strategic Reconnaissance Wing flew about 3,435 UAV missions during the war at a cost of about 554 UAVs lost to all causes. In the words of USAF General George S. Brown, Commander, Air Force Systems Command, in 1972, "The only reason we need (UAVs) is that we don't want to needlessly expend the man in the cockpit." Later that year, General John C. Meyer, Commander in Chief, Strategic Air Command, stated, "we let the drone do the high-risk flying ... the loss rate is high, but we are willing to risk more of them ...they save lives!"
During the 1973 Yom Kippur War, Soviet-supplied surface-to-air missile-batteries in Egypt and Syria caused heavy damage to Israeli fighter jets. As a result, Israel developed the IAI Scout as the first UAV with real-time surveillance. The images and radar decoys provided by these UAVs helped Israel to completely neutralize the Syrian air defenses at the start of the 1982 Lebanon War, resulting in no pilots downed. In Israel in 1987, UAVs were first used as proof-of-concept of super-agility, post-stall controlled flight in combat-flight simulations that involved tailless, stealth-technology-based, three-dimensional thrust vectoring flight-control, and jet-steering.
Modern UAVs
With the maturing and miniaturization of applicable technologies in the 1980s and 1990s, interest in UAVs grew within the higher echelons of the U.S. military. The U.S. funded the Counterterrorism Center (CTC) within the CIA, which sought to fight terrorism with the aid of modernized drone technology. In the 1990s, the U.S. DoD gave a contract to AAI Corporation along with Israeli company Malat. The U.S. Navy bought the AAI Pioneer UAV that AAI and Malat developed jointly. Many of these UAVs saw service in the 1991 Gulf War. UAVs demonstrated the possibility of cheaper, more capable fighting-machines, deployable without risk to aircrews. Initial generations primarily involved surveillance aircraft, but some carried armaments, such as the General Atomics MQ-1 Predator, that launched AGM-114 Hellfire air-to-ground missiles.
CAPECON, a European Union project to develop UAVs, ran from 1 May 2002 to 31 December 2005.
By the early 2010s, the United States Air Force (USAF) employed 7,494 UAVs, almost one in three USAF aircraft. The Central Intelligence Agency also operated UAVs. By 2013 at least 50 countries used UAVs. China, Iran, Israel, Pakistan, Turkey, and others designed and built their own varieties. The use of drones has continued to increase. Due to their wide proliferation, no comprehensive list of UAV systems exists.
The development of smart technologies and improved electrical-power systems led to a parallel increase in the use of drones for consumer and general aviation activities. Quadcopter drones exemplify the widespread popularity of hobby radio-controlled aircraft and toys, but the use of UAVs in commercial and general aviation is limited by a lack of autonomy and by new regulatory environments which require line-of-sight contact with the pilot.
In 2020, a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021. This may have been the first time an autonomous killer-robot armed with lethal weaponry attacked human beings.
Superior drone technology, specifically the Turkish Bayraktar TB2, played a role in Azerbaijan's successes in the 2020 Nagorno-Karabakh war against Armenia.
UAVs are also used in NASA missions. The Ingenuity helicopter is an autonomous UAV that operated on Mars from 2021 to 2024. The Dragonfly spacecraft, currently in development, aims to reach and examine Saturn's moon Titan. Its primary goal is to roam around the surface, expanding the area researched beyond what has previously been seen by landers. As a UAV, Dragonfly allows examination of potentially diverse types of soil. The drone is set to launch in 2027 and is estimated to take seven more years to reach the Saturnian system.
Miniaturization is also supporting the development of small UAVs which can be used individually or in fleets, offering the possibility of surveying large areas in a relatively short time.
According to data from GlobalData, the global military uncrewed aerial systems (UAS) market, which forms a significant part of the UAV industry, is projected to experience a compound annual growth rate of 4.8% over the next decade. This represents a roughly 60% increase in market size, from $12.5 billion in 2024 to an estimated $20 billion by 2034.
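That figure is consistent with simple compounding of the stated growth rate (an arithmetic check, not a number from the GlobalData report itself): $\$12.5\text{ billion} \times 1.048^{10} \approx \$12.5\text{ billion} \times 1.598 \approx \$20\text{ billion}$.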
Design
Crewed and uncrewed aircraft of the same type generally have recognizably similar physical components. The main exceptions are the cockpit and environmental control system or life support systems. Some UAVs carry payloads (such as a camera) that weigh considerably less than an adult human, and as a result, can be considerably smaller. Though they carry heavy payloads, weaponized military UAVs are lighter than their crewed counterparts with comparable armaments.
Small civilian UAVs have no life-critical systems, and can thus be built out of lighter but less sturdy materials and shapes, and can use less robustly tested electronic control systems. For small UAVs, the quadcopter design has become popular, though this layout is rarely used for crewed aircraft. Miniaturization means that less-powerful propulsion technologies can be used that are not feasible for crewed aircraft, such as small electric motors and batteries.
Control systems for UAVs are often different from crewed craft. For remote human control, a camera and video link almost always replace the cockpit windows; radio-transmitted digital commands replace physical cockpit controls. Autopilot software is used on both crewed and uncrewed aircraft, with varying feature sets.
Aircraft configuration
UAVs can be designed in different configurations than manned aircraft both because there is no need for a cockpit and its windows, and there is no need to optimize for human comfort, although some UAVs are adapted from piloted examples, or are designed for optionally piloted modes. Air safety is also less of a critical requirement for unmanned aircraft, allowing the designer greater freedom to experiment. Instead, UAVs are typically designed around their onboard payloads and their ground equipment. These factors have led to a great variety of airframe and motor configurations in UAVs.
For conventional flight the flying wing and blended wing body offer light weight combined with low drag and stealth, and are popular configurations for many use cases. Larger types which carry a variable payload are more likely to feature a distinct fuselage with a tail for stability, control and trim, although the wing configurations in use vary widely.
For uses that require vertical flight or hovering, the tailless quadcopter requires a relatively simple control system and is common for smaller UAVs. Multirotor designs with six or more rotors are more common with larger UAVs, where redundancy is prioritized.
Propulsion
Traditional internal combustion and jet engines remain in use for drones requiring long range. However, for shorter-range missions electric power has almost entirely taken over. The distance record for a UAV across the North Atlantic Ocean is held by a gasoline-powered model airplane built from balsa wood and mylar skin: in 2003, one of Maynard Hill's creations flew 1,882 miles across the Atlantic Ocean on less than a gallon of fuel.
Besides the traditional piston engine, the Wankel rotary engine is used by some drones. This type offers high power output for lower weight, with quieter and more vibration-free running. Claims have also been made for improved reliability and greater range.
Small drones mostly use lithium-polymer batteries (Li-Po), while some larger vehicles have adopted the hydrogen fuel cell. The energy density of modern Li-Po batteries is far less than gasoline or hydrogen. However electric motors are cheaper, lighter and quieter. Complex multi-engine, multi-propeller installations are under development with the goal of improving aerodynamic and propulsive efficiency. For such complex power installations, battery elimination circuitry (BEC) may be used to centralize power distribution and minimize heating, under the control of a microcontroller unit (MCU).
Ornithopters – wing propulsion
Flapping-wing ornithopters, imitating birds or insects, have been flown as microUAVs. Their inherent stealth recommends them for spy missions.
Sub-1g microUAVs inspired by flies, albeit using a power tether, have been able to "land" on vertical surfaces. Other projects mimic the flight of beetles and other insects.
Computer control systems
UAV computing capability followed the advances of computing technology, beginning with analog controls and evolving into microcontrollers, then system-on-a-chip (SOC) and single-board computers (SBC).
Modern system hardware for UAV control is often called the flight controller (FC), flight controller board (FCB) or autopilot. Common UAV control hardware typically incorporates a primary microprocessor, a secondary or failsafe processor, and sensors such as accelerometers, gyroscopes, magnetometers, and barometers into a single module.
In 2024, EASA agreed on the first certification basis for a UAV flight controller, in compliance with ETSO-C198, for Embention's autopilot. The certification of UAV flight control systems aims to facilitate the integration of UAVs within the airspace and the operation of drones in critical areas.
Architecture
Sensors
Position and movement sensors give information about the aircraft state. Exteroceptive sensors deal with external information like distance measurements, while exproprioceptive ones correlate internal and external states.
Non-cooperative sensors are able to detect targets autonomously, so they are used for separation assurance and collision avoidance.
Degrees of freedom (DOF) refers to both the amount and quality of onboard sensors: 6 DOF implies 3-axis gyroscopes and accelerometers (a typical inertial measurement unit IMU), 9 DOF refers to an IMU plus a compass, 10 DOF adds a barometer and 11 DOF usually adds a GPS receiver.
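The DOF convention can be restated as a lookup table; the following Python sketch is purely illustrative and simply encodes the description above:

# DOF count -> onboard sensor suite, per the convention described above.
DOF_SENSOR_SUITES = {
    6: ["3-axis gyroscope", "3-axis accelerometer"],  # a typical IMU
    9: ["3-axis gyroscope", "3-axis accelerometer", "compass"],
    10: ["3-axis gyroscope", "3-axis accelerometer", "compass", "barometer"],
    11: ["3-axis gyroscope", "3-axis accelerometer", "compass", "barometer", "GPS receiver"],
}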
In addition to the navigation sensors, the UAV (or UAS) can also be equipped with monitoring devices such as RGB, multispectral or hyperspectral cameras, or LiDAR, which can provide specific measurements or observations.
Actuators
UAV actuators include digital electronic speed controllers (which control the RPM of the motors) linked to motors/engines and propellers, servomotors (for planes and helicopters mostly), weapons, payload actuators, LEDs and speakers.
Software
Modern UAVs run a software stack that ranges from low-level firmware that directly controls actuators, to high level flight planning. At the lowest level, firmware directly controls reading from sensors such as an IMU and commanding actuators such as motors. Control software (often referred to as an autopilot) is responsible for computing actuator speeds given desired vehicle velocity. Due to its direct interaction with hardware, this software is time-critical and may run on microcontrollers. This software may also handle radio communications, in the case of UAVs that are not autonomous. One popular example is the PX4 autopilot.
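To make the low-level layer concrete, the following Python sketch shows the kind of computation such control software performs for a quadcopter: mixing a desired total thrust and roll/pitch/yaw corrections into individual motor commands. It is a deliberate simplification, not PX4's actual implementation, and the sign convention is one common choice for an "X" airframe:

# Illustrative quadcopter "X" mixer: convert a desired total thrust and
# roll/pitch/yaw corrections into four normalized motor commands (0..1).
def mix_quad_x(thrust, roll, pitch, yaw):
    motors = [
        thrust + roll + pitch - yaw,  # front-left
        thrust - roll + pitch + yaw,  # front-right
        thrust + roll - pitch + yaw,  # rear-left
        thrust - roll - pitch - yaw,  # rear-right
    ]
    # Clamp to the valid command range of the electronic speed controllers.
    return [min(1.0, max(0.0, m)) for m in motors]

# Example: mostly hover thrust with a small roll correction.
print(mix_quad_x(thrust=0.6, roll=0.05, pitch=-0.02, yaw=0.01))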
At the next level, autonomy algorithms compute the desired velocity given higher level goals. For example, trajectory optimization may be used to calculate a flight trajectory given a desired goal location. This software is not necessarily time-critical, and can often run on a single board computer running an operating system such as Linux with relaxed time constraints.
Loop principles
UAVs employ open-loop, closed-loop or hybrid control architectures.
Open loop This type provides a positive control signal (faster, slower, left, right, up, down) without incorporating feedback from sensor data.
Closed loop This type incorporates sensor feedback to adjust behavior (reduce speed to reflect tailwind, move to altitude 300 feet). The PID controller is common. Sometimes, feedforward is employed, shifting part of the control effort away from the feedback loop.
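A minimal discrete PID controller, of the kind referred to above, can be sketched in a few lines of Python; the gains and the altitude-hold example are placeholders, and real autopilots add integrator limits, filtering and output saturation:

# Minimal discrete PID controller, as commonly used in UAV closed-loop control.
# Gains (kp, ki, kd) are illustrative and would be tuned per vehicle.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold an altitude of 300 feet by adjusting the climb-rate command.
altitude_pid = PID(kp=0.8, ki=0.1, kd=0.2)
climb_command = altitude_pid.update(setpoint=300.0, measurement=287.5, dt=0.02)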
Communications
UAVs use a radio for control and exchange of video and other data. Early UAVs had only narrowband uplink. Downlinks came later. These bi-directional narrowband radio links carried command and control (C&C) and telemetry data about the status of aircraft systems to the remote operator.
In most modern UAV applications, video transmission is required. So instead of having separate links for C&C, telemetry and video traffic, a broadband link is used to carry all types of data. These broadband links can leverage quality of service techniques and carry TCP/IP traffic that can be routed over the internet.
The radio signal from the operator side can be issued from either:
Ground control – a human operating a radio transmitter/receiver, a smartphone, a tablet, a computer, or the original meaning of a military ground control station (GCS).
Remote network system, such as satellite duplex data links for some military powers. Downstream digital video over mobile networks has also entered consumer markets, while direct UAV control uplink over the cellular mesh and LTE have been demonstrated and are in trials.
Another aircraft serving as a relay or mobile control station, known in military use as manned-unmanned teaming (MUM-T).
Modern networking standards have explicitly considered drones and therefore include optimizations. The 5G standard has mandated reduced user-plane latency to 1 ms when using ultra-reliable and low-latency communications.
UAV-to-UAV coordination can be supported by Remote ID communication technology. Remote ID messages (containing the UAV coordinates) are broadcast and can be used for collision-free navigation.
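The actual broadcast format is standardized (e.g. ASTM F3411), so the structure below is a deliberately simplified stand-in; it sketches how broadcast position reports could feed a crude proximity check in Python:

import math
from dataclasses import dataclass

# Simplified stand-in for a Remote ID broadcast message; field names are
# illustrative only, not the standardized message layout.
@dataclass
class RemoteIDMessage:
    uav_id: str
    lat: float        # degrees
    lon: float        # degrees
    altitude_m: float

def too_close(a, b, horizontal_m=50.0, vertical_m=15.0):
    # Crude flat-earth horizontal distance, adequate over short ranges.
    meters_per_degree = 111320.0
    dx = (a.lon - b.lon) * meters_per_degree * math.cos(math.radians(a.lat))
    dy = (a.lat - b.lat) * meters_per_degree
    return math.hypot(dx, dy) < horizontal_m and abs(a.altitude_m - b.altitude_m) < vertical_m

print(too_close(RemoteIDMessage("A", 52.0000, 4.0000, 100.0),
                RemoteIDMessage("B", 52.0003, 4.0001, 95.0)))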
Autonomy
The level of autonomy in UAVs varies widely. UAV manufacturers often build in specific autonomous operations, such as:
Self-level: attitude stabilization on the pitch and roll axes.
Altitude hold: The aircraft maintains its altitude using barometric pressure and/or GPS data.
Hover/position hold: Keep level pitch and roll, stable yaw heading and altitude while maintaining position using GNSS or inertial sensors.
Headless mode: Pitch control relative to the position of the pilot rather than relative to the vehicle's axes.
Care-free: automatic roll and yaw control while moving horizontally
Takeoff and landing (using a variety of aircraft or ground-based sensors and systems; see also "autoland")
Failsafe: automatic landing or return-to-home upon loss of control signal
Return-to-home: Fly back to the point of takeoff (often gaining altitude first to avoid possible intervening obstructions such as trees or buildings).
Follow-me: Maintain relative position to a moving pilot or other object using GNSS, image recognition or homing beacon.
GPS waypoint navigation: Using GNSS to navigate to an intermediate location on a travel path (a minimal distance-and-bearing sketch follows this list).
Orbit around an object: Similar to Follow-me but continuously circle a target.
Pre-programmed aerobatics (such as rolls and loops)
Pre-programmed delivery (delivery drones)
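The geometry behind GPS waypoint navigation reduces to a great-circle distance and an initial bearing toward the waypoint; the Python sketch below shows only that core computation, while real autopilots layer cross-track control, geofencing and failsafes on top (the coordinates in the example are arbitrary):

import math

# Great-circle distance (haversine) and initial bearing from the vehicle
# position (lat1, lon1) to a waypoint (lat2, lon2), both in degrees.
EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    bearing = math.degrees(math.atan2(
        math.sin(dlmb) * math.cos(phi2),
        math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb),
    )) % 360
    return dist, bearing

# Example: distance in meters and heading in degrees to a nearby waypoint.
print(distance_and_bearing(52.0, 4.0, 52.001, 4.002))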
One approach to quantifying autonomous capabilities is based on OODA terminology, as suggested by a 2002 US Air Force Research Laboratory report.
Full autonomy is available for specific tasks, such as airborne refueling or ground-based battery switching.
Other functions available or under development include collective flight, real-time collision avoidance, wall following, corridor centring, simultaneous localization and mapping, swarming, cognitive radio, and machine learning. In this context, computer vision can play an important role in automatically ensuring flight safety.
Performance considerations
Flight envelope
UAVs can be programmed to perform aggressive maneuvers or landing/perching on inclined surfaces, and then to climb toward better communication spots. Some UAVs can control flight with varying flight models, such as VTOL designs.
UAVs can also implement perching on a flat vertical surface.
Endurance
UAV endurance is not constrained by the physiological capabilities of a human pilot.
Because of their small size, low weight, low vibration and high power to weight ratio, Wankel rotary engines are used in many large UAVs. Their engine rotors cannot seize; the engine is not susceptible to shock-cooling during descent and it does not require an enriched fuel mixture for cooling at high power. These attributes reduce fuel usage, increasing range or payload.
Proper drone cooling is essential for long-term drone endurance. Overheating and subsequent engine failure is the most common cause of drone failure.
Hydrogen fuel cells, using hydrogen power, may be able to extend the endurance of small UAVs, up to several hours.
Micro air vehicle endurance is so far best achieved with flapping-wing UAVs, followed by planes, with multirotors last, due to the lower Reynolds numbers involved.
Solar-electric UAVs, a concept originally championed by the AstroFlight Sunrise in 1974, have achieved flight times of several weeks.
Solar-powered atmospheric satellites ("atmosats") designed for operating at altitudes exceeding 20 km (12 miles, or 60,000 feet) for as long as five years could potentially perform duties more economically and with more versatility than low Earth orbit satellites. Likely applications include weather monitoring, disaster recovery, Earth imaging and communications.
Electric UAVs powered by microwave power transmission or laser power beaming are other potential endurance solutions.
Another application for a high endurance UAV would be to "stare" at a battlefield for a long interval (ARGUS-IS, Gorgon Stare, Integrated Sensor Is Structure) to record events that could then be played backwards to track battlefield activities.
The delicacy of the British PHASA-35 military drone (at a late stage of development) is such that traversing the first, turbulent twelve miles of atmosphere is a hazardous endeavor. It has, however, remained on station at 65,000 feet for 24 hours. Airbus' Zephyr attained 70,000 feet in 2023 and has flown for 64 days, with 200 days the eventual aim. This is sufficiently close to near-space for such aircraft to be regarded as "pseudo-satellites" in terms of their operational capabilities.
Reliability
Reliability improvements target all aspects of UAV systems, using resilience engineering and fault tolerance techniques.
Individual reliability covers robustness of flight controllers, to ensure safety without excessive redundancy to minimize cost and weight. In addition, dynamic assessment of the flight envelope allows damage-resilient UAVs, using non-linear analysis with ad hoc designed loops or neural networks. UAV software liability is trending toward the design and certification standards of crewed avionics software.
Swarm resilience involves maintaining operational capabilities and reconfiguring tasks given unit failures.
Applications
In recent years, autonomous drones have begun to transform various application areas as they can fly beyond visual line of sight (BVLOS) while maximizing production, reducing costs and risks, ensuring site safety, security and regulatory compliance, and protecting the human workforce in times of a pandemic. They can also be used for consumer-related missions like package delivery, as demonstrated by Amazon Prime Air, and critical deliveries of health supplies.
There are numerous civilian, commercial, military, and aerospace applications for UAVs. These include:
General: recreation, disaster relief, archeology, conservation of biodiversity and habitat, law enforcement, crime, and terrorism.
Commercial: aerial surveillance, filmmaking, journalism, scientific research, surveying, cargo transport, mining, manufacturing, forestry, solar farming, thermal energy, ports and agriculture.
Warfare
As of 2020, seventeen countries have armed UAVs, and more than 100 countries use UAVs in a military capacity. The top five countries producing domestic UAV designs are the United States, China, Israel, Iran and Turkey. Top military UAV manufacturers include General Atomics, Lockheed Martin, Northrop Grumman, Boeing, Baykar, TAI, IAIO, CASC and CAIG. China established and expanded its presence in the military UAV market after 2010. In the early 2020s, Turkey also established and expanded its presence in the military UAV market.
In the early 2010s, Israeli companies mainly focused on small surveillance UAV systems; by number of drones, Israel exported 60.7% (2014) of UAVs on the market, while the United States exported 23.9% (2014). Between 2010 and 2014, 439 drones were traded, compared with 322 in the previous five years; only a small fraction of this trade (just 11, or 2.5%, of the 439) consisted of armed drones. The US alone operated over 9,000 military UAVs in 2014, of which more than 7,000 were RQ-11 Raven miniature UAVs. Since 2010, Chinese drone companies have begun to export large quantities of drones to the global military market. Of the 18 countries that are known to have received military drones between 2010 and 2019, the top 12 all purchased their drones from China. The shift accelerated in the 2020s due to China's advancement in drone technologies and manufacturing, compounded by market demand from the Russian invasion of Ukraine and the Israel–Gaza conflict.
For intelligence and reconnaissance missions, the inherent stealth of micro UAV flapping-wing ornithopters, imitating birds or insects, offers potential for covert surveillance and makes them difficult targets to bring down.
Unmanned surveillance and reconnaissance aerial vehicles are used for reconnaissance, attack, demining, and target practice.
Following the 2022 Russian invasion of Ukraine, a dramatic increase in UAV development took place, with Ukraine creating the Brave1 platform to promote rapid development of innovative systems.
Civil applications
The civilian (commercial and general) drone market is dominated by Chinese companies. Chinese manufacturer DJI alone had 74% of the civil market share in 2018, with no other company accounting for more than 5%. The company continued to hold over 70% of the global market share in 2023, despite increasing scrutiny and sanctions from the United States. The US Interior Department grounded its fleet of DJI drones in 2020, while the Justice Department prohibited the use of federal funds for the purchase of DJI and other foreign-made UAVs.
DJI is followed by American company 3D Robotics, Chinese company Yuneec, Autel Robotics, and French company Parrot.
As of May 2021, 873,576 UAVs had been registered with the US FAA, of which 42% were categorized as commercial and 58% as recreational. NPD data from 2018 point to consumers increasingly purchasing drones with more advanced features, with 33 percent growth in both the $500+ and $1,000+ market segments.
The civil UAV market is relatively new compared to the military one. Companies are emerging in both developed and developing nations at the same time. Many early-stage startups have received support and funding from investors, as is the case in the United States, and from government agencies, as is the case in India. Some universities offer research and training programs or degrees. Private entities also provide online and in-person training programs for both recreational and commercial UAV use.
Consumer drones are widely used by police and military organizations worldwide because of the cost-effective nature of consumer products. Since 2018, the Israeli military have used DJI UAVs for light reconnaissance missions. DJI drones have been used by Chinese police in Xinjiang since 2017 and American police departments nationwide since 2018. Both Ukraine and Russia used commercial DJI drones extensively during the Russian invasion of Ukraine. These civilian DJI drones were sourced by governments and hobbyists, and through international donations to Ukraine and Russia to support each side on the battlefield, and were often flown by drone hobbyists recruited by the armed forces. The prevalence of DJI drones was attributable to their market dominance, affordability, high performance, and reliability.
Entertainment
Drones are also used in nighttime displays for artistic and advertising purposes; the main benefits are that they are safer, quieter and better for the environment than fireworks. They can replace or supplement fireworks displays, reducing the financial burden on festivals. In addition, they can complement fireworks, since drones are able to carry them aloft, creating new forms of artwork in the process.
Drones can also be used for racing, either with or without VR functionality.
Aerial photography
Drones are ideally suited to capturing aerial shots in photography and cinematography, and are widely used for this purpose. Small drones avoid the need for precise coordination between pilot and cameraman, with the same person taking on both roles. Big drones with professional cine cameras usually have a drone pilot and a camera operator who controls camera angle and lens. For example, the AERIGON cinema drone, used in film production, is operated by two people. Drones provide access to dangerous, remote or otherwise inaccessible sites.
Environmental monitoring
For environmental monitoring, UASs or UAVs offer the great advantage of generating a new generation of surveys at very high or ultra-high resolution in both space and time. This offers the opportunity to bridge the existing gap between satellite data and field monitoring, and has stimulated a huge number of activities aimed at enhancing the description of natural and agricultural ecosystems. The most common applications are:
Topographic surveys for the production of orthomosaics, digital surface models and 3D models;
Monitoring of natural ecosystems for biodiversity monitoring, habitat mapping, detection of invasive alien species and study of ecosystem degradation due to invasive species or disturbances;
Precision agriculture which exploits all available technologies including UAV in order to produce more with less (e.g., optimisation of fertilizers, pesticides, irrigation);
River monitoring: several methods have been developed to perform flow monitoring using image velocimetry, which allows proper description of 2D flow velocity fields;
Monitoring of structural integrity for any type of structure, whether a dam, a railway, or other dangerous, inaccessible or massive sites;
Mineral detection: for acid mine drainage, UAVs with hyperspectral cameras can produce detailed maps of proxy minerals (e.g. goethite, jarosite) for certain pH values in natural, mining and post-mining environments, such as remediated sites.
These activities can be carried out using different kinds of measurements, such as photogrammetry, thermography, multispectral imaging, 3D field scanning, and normalized difference vegetation index (NDVI) maps.
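As a small worked example of the last of these, NDVI is the band ratio (NIR − Red)/(NIR + Red); the Python sketch below computes it per pixel, with made-up reflectance values standing in for real UAV imagery bands:

import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1; higher values
# indicate denser, healthier vegetation.
def ndvi(nir, red):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero

nir_band = np.array([[0.60, 0.55], [0.20, 0.72]])  # illustrative reflectances
red_band = np.array([[0.10, 0.12], [0.18, 0.08]])
print(ndvi(nir_band, red_band))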
Geological hazards
UAVs have become a widely used tool for studying geohazards such as landslides. Various sensors, including radar, optical, and thermal, can be mounted on UAVs to monitor different properties. UAVs enable the capture of images of various landslide features, such as transverse, radial, and longitudinal cracks, ridges, scarps, and surfaces of rupture, even in inaccessible areas of the sliding mass. Moreover, processing the optical images captured by UAVs also allows for the creation of point clouds and 3D models, from which these properties can be derived. Comparing point clouds obtained at different times allows for the detection of changes caused by landslide deformation.
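A minimal sketch of such cloud-to-cloud change detection, assuming two already co-registered surveys stored as NumPy arrays (the threshold and the synthetic data are illustrative only):

import numpy as np
from scipy.spatial import cKDTree

# For each point in the later survey, find the distance to the nearest point
# in the earlier survey; large distances flag possible landslide deformation.
def detect_changes(cloud_before, cloud_after, threshold_m=0.5):
    tree = cKDTree(cloud_before)
    distances, _ = tree.query(cloud_after)       # nearest-neighbour distances
    return cloud_after[distances > threshold_m]  # points that appear to have moved

before = np.random.rand(1000, 3) * 100.0     # stand-in for a real survey
after = before + np.array([0.0, 0.0, -0.8])  # pretend the slope dropped 0.8 m
print(len(detect_changes(before, after)))    # all points flagged as changed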
Mineral exploration
UAVs may help in the discovery of new or reevaluation of known mineral deposits to meet the demand for raw materials such as critical raw metals (e.g. cobalt, nickel), rare earths and battery minerals. By employing a suite of sensors (e.g. spectral imaging, Lidar, magnetics, gamma-ray spectroscopy), and similar to those used in environmental monitoring, UAV-based data can produce maps of geological surface and subsurface features, contributing to more efficient and targeted mineral exploration.
Agriculture, forestry and environmental studies
As global demand for food production grows exponentially, resources are depleted, farmland is reduced, and agricultural labor is increasingly in short supply, there is an urgent need for more convenient and smarter agricultural solutions than traditional methods, and the agricultural drone and robotics industry is expected to make progress. Agricultural drones have been used to help build sustainable agriculture all over the world leading to a new generation of agriculture. In this context, there is a proliferation of innovations in both tools and methodologies which allow precise description of vegetation state and also may help to precisely distribute nutrients, pesticides or seeds over a field.
The use of UAVs is also being investigated to help detect and fight wildfires, whether through observation or launching pyrotechnic devices to start backfires.
UAVs are also now widely used to survey wildlife such as nesting seabirds, seals and even wombat burrows.
Law enforcement
Police can use drones for applications such as search and rescue and traffic monitoring.
Humanitarian aid
Drones are increasingly finding their application in humanitarian aid and disaster relief, where they are used for a wide range of applications such as delivering food, medicine and essential items to remote areas or image mapping before and following disasters.
Safety and security
Threats
Nuisance
UAVs can threaten airspace security in numerous ways, including unintentional collisions or other interference with other aircraft, deliberate attacks or by distracting pilots or flight controllers. The first incident of a drone-airplane collision occurred in mid-October 2017 in Quebec City, Canada. The first recorded instance of a drone collision with a hot air balloon occurred on 10 August 2018 in Driggs, Idaho, United States; although there was no significant damage to the balloon nor any injuries to its 3 occupants, the balloon pilot reported the incident to the National Transportation Safety Board, stating that "I hope this incident helps create a conversation of respect for nature, the airspace, and rules and regulations". Unauthorized UAV flights into or near major airports have prompted extended shutdowns of commercial flights.
Drones caused significant disruption at Gatwick Airport during December 2018, requiring the deployment of the British Army.
In the United States, flying close to a wildfire is punishable by a maximum $25,000 fine. Nonetheless, in 2014 and 2015, firefighting air support in California was hindered on several occasions, including at the Lake Fire and the North Fire. In response, California legislators introduced a bill that would allow firefighters to disable UAVs which invaded restricted airspace. The FAA later required registration of most UAVs.
Security vulnerabilities
By 2017, drones were being used to drop contraband into prisons.
Interest in UAV cybersecurity was raised greatly after the 2009 Predator UAV video-stream hijacking incident, in which Islamic militants used cheap, off-the-shelf equipment to intercept the video feeds of a UAV. Another risk is the possibility of hijacking or jamming a UAV in flight. Several security researchers have made public some vulnerabilities in commercial UAVs, in some cases even providing full source code or tools to reproduce their attacks. At a workshop on UAVs and privacy in October 2016, researchers from the Federal Trade Commission showed they were able to hack into three different consumer quadcopters and noted that UAV manufacturers can make their UAVs more secure by the basic security measures of encrypting the Wi-Fi signal and adding password protection.
Aggression
Many UAVs have been loaded with dangerous payloads and/or crashed into targets. Payloads have included, or could include, explosives and chemical, radiological or biological hazards. UAVs with generally non-lethal payloads could possibly be hacked and put to malicious purposes. Counter-UAV systems (C-UAS), from detection to electronic warfare to UAVs designed to destroy other UAVs, are in development and being deployed by states to counter this threat.
Such developments have occurred despite the difficulties. As J. Rogers stated in a 2017 interview to A&T, "There is a big debate out there at the moment about what the best way is to counter these small UAVs, whether they are used by hobbyists causing a bit of a nuisance or in a more sinister manner by a terrorist actor".
Countermeasures
Counter unmanned air system
The malicious use of UAVs has led to the development of counter unmanned air system (C-UAS) technologies. Automatic tracking and detection of UAVs from commercial cameras have become accurate thanks to the development of deep-learning-based algorithms. It is also possible to automatically identify UAVs across different cameras with different viewpoints and hardware specifications using re-identification methods. Commercial systems such as the Aaronia AARTOS have been installed at major international airports. Once a UAV is detected, it can be countered with kinetic force (missiles, projectiles or another UAV) or by non-kinetic force (laser, microwaves, communications jamming). Anti-aircraft missile systems such as the Iron Dome are also being enhanced with C-UAS technologies. Utilising a smart UAV swarm to counter one or more hostile UAVs has also been proposed.
Regulation
Regulatory bodies around the world are developing unmanned aircraft system traffic management solutions to better integrate UAVs into airspace.
The use of unmanned aerial vehicles is becoming increasingly regulated by the civil aviation authorities of individual countries. Regulatory regimes can differ significantly according to drone size and use. The International Civil Aviation Organization (ICAO) began exploring the use of drone technology as far back as 2005, which resulted in a 2011 report. France was among the first countries to set a national framework based on this report and larger aviation bodies such as the FAA and the EASA quickly followed suit. In 2021, the FAA published a rule requiring all commercially used UAVs and all UAVs regardless of intent weighing 250 g or more to participate in Remote ID, which makes drone locations, controller locations, and other information public from takeoff to shutdown; this rule has since been challenged in the pending federal lawsuit RaceDayQuads v. FAA.
EU Drone Certification - Class Identification Label
The implementation of the Class Identification Label serves a crucial purpose in the regulation and operation of drones. The label is a verification mechanism designed to confirm that drones within a specific class meet the rigorous standards set by administrations for design and manufacturing. These standards are necessary to ensure the safety and reliability of drones in various industries and applications.
By providing this assurance to customers, the Class Identification Label helps to increase confidence in drone technology and encourages wider adoption across industries. This, in turn, contributes to the growth and development of the drone industry and supports the integration of drones into society.
Export controls
The export of UAVs or technology capable of carrying a 500 kg payload at least 300 km is restricted in many countries by the Missile Technology Control Regime.
| Technology | Aviation | null |
58906 | https://en.wikipedia.org/wiki/Gland | Gland | A gland is a cell or an organ in an animal's body that produces and secretes different substances that the organism needs, either into the bloodstream or into a body cavity or outer surface. A gland may also function to remove unwanted substances such as urine from the body.
There are two types of gland, each with a different method of secretion. Endocrine glands are ductless and secrete their products, hormones, directly into interstitial spaces to be taken up into the bloodstream. Exocrine glands secrete their products through a duct into a body cavity or outer surface.
Glands are mostly composed of epithelial tissue, and typically have a supporting framework of connective tissue, and a capsule.
Structure
Development
Every gland is formed by an ingrowth from an epithelial surface. This ingrowth may in the beginning possess a tubular structure, but in other instances glands may start as a solid column of cells which subsequently becomes tubulated.
As growth proceeds, the column of cells may split or give off offshoots, in which case a compound gland is formed. In many glands, the number of branches is limited; in others (salivary, pancreas), a very large structure is finally formed by repeated growth and sub-division. As a rule, the branches do not unite with one another. One exception to this rule is the liver, where a reticulated compound gland is produced. In compound glands the more typical or secretory epithelium is found forming the terminal portion of each branch, and the uniting portions form ducts and are lined with a less modified type of epithelial cell.
Glands are classified according to their shape.
If the gland retains its shape as a tube throughout, it is termed a tubular gland.
In the second main variety of gland, the secretory portion is enlarged and the lumen is variously increased in size. These are termed alveolar or saccular glands.
Types of glands
Glands are divided based on their function into two groups:
Endocrine glands
Endocrine glands secrete substances that circulate through the bloodstream. The glands secrete their products through the basal lamina into the bloodstream. The basal lamina can typically be seen as a layer around the gland to which more than a million tiny blood vessels are attached. These glands often secrete hormones, which play an important role in maintaining homeostasis. The pineal gland, thymus gland, pituitary gland, thyroid gland, and the two adrenal glands are all endocrine glands.
Exocrine glands
Exocrine glands secrete their products through a duct onto an outer or inner surface of the body, such as the skin or the gastrointestinal tract. Secretion is directly onto the apical surface. The glands in this group can be divided into three groups:
Merocrine glands – cells secrete their substances by exocytosis. (e.g. mucous and serous glands; also called "eccrine", e.g. major sweat glands of humans, goblet cells, salivary gland, tear gland and intestinal glands)
Apocrine glands – a portion of the secreting cell's body is lost during secretion. The term "apocrine gland" is often used to refer to the apocrine sweat glands; however, it is thought that apocrine sweat glands may not be true apocrine glands, as they may not use the apocrine method of secretion. (e.g. mammary gland; sweat glands of the armpit, pubic region, skin around the anus, lips and nipples)
Holocrine glands – the entire cell disintegrates to secrete its substances. (e.g. sebaceous glands: meibomian and zeis glands)
Exocrine glands can further be categorized by their product:
Serous glands secrete a watery, often protein-rich product, e.g. sweat glands.
Mucous glands secrete a viscous product, rich in carbohydrates (such as glycoproteins), e.g. goblet cells.
Sebaceous glands secrete a lipid product. These glands are also known as oil glands, e.g. Fordyce spots and meibomian glands.
Clinical significance
Adenosis is any disease of a gland. The diseased gland has abnormal formation or development of glandular tissue which is sometimes tumorous.
| Biology and health sciences | Animal anatomy and morphology | Biology |
58911 | https://en.wikipedia.org/wiki/Measles | Measles | Measles (probably from Middle Dutch or Middle High German masel(e) ("blemish, blood blister")) is a highly contagious, vaccine-preventable infectious disease caused by measles virus. Other names include morbilli, rubeola, red measles, and English measles. Both rubella, also known as German measles, and roseola are different diseases caused by unrelated viruses.
Symptoms usually develop 10–12 days after exposure to an infected person and last 7–10 days. Initial symptoms typically include fever, often greater than 40 °C (104 °F), cough, runny nose, and inflamed eyes. Small white spots known as Koplik's spots may form inside the mouth two or three days after the start of symptoms. A red, flat rash which usually starts on the face and then spreads to the rest of the body typically begins three to five days after the start of symptoms. Common complications include diarrhea (in 8% of cases), middle ear infection (7%), and pneumonia (6%). These occur in part due to measles-induced immunosuppression. Less commonly seizures, blindness, or inflammation of the brain may occur.
Measles is an airborne disease which spreads easily from one person to the next through the coughs and sneezes of infected people. It may also be spread through direct contact with mouth or nasal secretions. It is extremely contagious: nine out of ten people who are not immune and share living space with an infected person will be infected. Estimates of the measles reproduction number vary beyond the frequently cited range of 12 to 18; a 2017 review identified feasible measles R values of 3.7–203.3. People are infectious to others from four days before to four days after the start of the rash. While often regarded as a childhood illness, it can affect people of any age. Most people do not get the disease more than once. Testing for the measles virus in suspected cases is important for public health efforts. Measles is not known to occur in other animals.
Once a person has become infected, no specific treatment is available, although supportive care may improve outcomes. Such care may include oral rehydration solution (slightly sweet and salty fluids), healthy food, and medications to control the fever. Antibiotics should be prescribed if secondary bacterial infections such as ear infections or pneumonia occur. Vitamin A supplementation is also recommended for children. Among cases reported in the U.S. between 1985 and 1992, death occurred in only 0.2% of cases, but may be up to 10% in people with malnutrition. Most of those who die from the infection are less than five years old.
The measles vaccine is effective at preventing the disease, is exceptionally safe, and is often delivered in combination with other vaccines. Due to the ease with which measles is transmitted from person to person in a community, more than 95% of the community must be vaccinated in order to achieve herd immunity. Vaccination resulted in an 80% decrease in deaths from measles between 2000 and 2017, with about 85% of children worldwide having received their first dose as of 2017.
Measles affects about 20 million people a year, primarily in the developing areas of Africa and Asia. It is one of the leading vaccine-preventable causes of death. In 1980, 2.6 million people died from measles, and in 1990, 545,000 died due to the disease; by 2014, global vaccination programs had reduced the number of deaths from measles to 73,000. Despite these trends, rates of disease and deaths increased from 2017 to 2019 due to a decrease in immunization.
Signs and symptoms
Symptoms typically begin 10–14 days after exposure. The classic symptoms include a four-day fever (the four Ds) and the three Cs—cough, coryza (head cold, fever, sneezing), and conjunctivitis (red eyes)—along with a maculopapular rash. Fever is common and typically lasts for about one week; the fever seen with measles is often as high as 40 °C (104 °F).
Koplik's spots seen inside the mouth are diagnostic for measles, but are temporary and therefore rarely seen. Koplik spots are small white spots that are commonly seen on the inside of the cheeks opposite the molars. They appear as "grains of salt on a reddish background." Recognizing these spots before a person reaches their maximum infectiousness can help reduce the spread of the disease.
The characteristic measles rash is classically described as a generalized red maculopapular rash that begins several days after the fever starts. It starts on the back of the ears and, after a few hours, spreads to the head and neck before spreading to cover most of the body. The measles rash appears two to four days after the initial symptoms and lasts for up to eight days. The rash is said to "stain", changing color from red to dark brown, before disappearing. Overall, measles usually resolves after about three weeks.
People who have been vaccinated against measles but have incomplete protective immunity may experience a form of modified measles. Modified measles is characterized by a prolonged incubation period and milder, less characteristic symptoms (a sparse and discrete rash of short duration).
Complications
Complications of measles are relatively common, ranging from mild ones such as diarrhea to serious ones such as pneumonia (either direct viral pneumonia or secondary bacterial pneumonia), laryngotracheobronchitis (croup) (either direct viral laryngotracheobronchitis or secondary bacterial bronchitis), otitis media, acute brain inflammation, and corneal ulceration (leading to corneal scarring). In about 1 in 600 unvaccinated infants infected under 15 months of age, and more rarely in older children and adults, infection leads to subacute sclerosing panencephalitis, which is progressive and eventually lethal.
In addition, measles can suppress the immune system for weeks to months, and this can contribute to bacterial superinfections such as otitis media and bacterial pneumonia. Two months after recovery, there is an 11–73% decrease in the number of antibodies against other bacteria and viruses.
The death rate in the 1920s was around 30% for measles pneumonia. People who are at high risk for complications are infants and children aged less than 5 years; adults aged over 20 years; pregnant women; people with compromised immune systems, such as from leukemia, HIV infection or innate immunodeficiency; and those who are malnourished or have vitamin A deficiency. Complications are usually more severe in adults. Between 1987 and 2000, the case fatality rate across the United States was three deaths per 1,000 cases attributable to measles, or 0.3%. In underdeveloped nations with high rates of malnutrition and poor healthcare, fatality rates have been as high as 28%. In immunocompromised persons (e.g., people with AIDS) the fatality rate is approximately 30%.
Even in previously healthy children, measles can cause serious illness requiring hospitalization. One out of every 1,000 measles cases progresses to acute encephalitis, which often results in permanent brain damage. One to three out of every 1,000 children who become infected with measles will die from respiratory and neurological complications.
Cause
Measles is caused by the measles virus, a single-stranded, negative-sense, enveloped RNA virus of the genus Morbillivirus within the family Paramyxoviridae.
The virus is highly contagious and is spread by coughing and sneezing via close personal contact or direct contact with secretions. Measles is the most contagious virus known; it can remain infective for up to two hours in the air or on nearby surfaces after an infected person has coughed or sneezed. Measles is so contagious that if one person has it, 90% of non-immune people who have close contact with them (e.g., household members) will also become infected. Humans are the only natural hosts of the virus, and no other animal reservoirs are known to exist, although mountain gorillas are believed to be susceptible to the disease.
Risk factors for measles virus infection include immunodeficiency caused by HIV/AIDS, immunosuppression following receipt of an organ or a stem cell transplant, alkylating agents, or corticosteroid therapy, regardless of immunization status; travel to areas where measles commonly occurs or contact with travelers from such an area; and the loss of passive, inherited antibodies before the age of routine immunization.
Pathophysiology
Once the measles virus gets onto the mucosa, it infects the epithelial cells in the trachea or bronchi. The measles virus uses a protein on its surface called hemagglutinin (H protein) to bind to a target receptor on the host cell. This receptor may be CD46, which is expressed on all nucleated human cells; CD150 (signaling lymphocyte activation molecule, or SLAM), which is found on immune cells such as B and T cells and on antigen-presenting cells; or nectin-4, a cellular adhesion molecule. Once bound, the fusion (F) protein helps the virus fuse with the membrane and ultimately get inside the cell.
As the virus is a single-stranded negative-sense RNA virus, it includes the enzyme RNA-dependent RNA polymerase (RdRp) which is used to transcribe its genome into a positive-sense mRNA strand.
Once transcribed, this mRNA is translated into viral proteins, wrapped in the cell's lipid envelope, and sent out of the cell as newly made virus. Within days, the measles virus spreads through local tissue and is picked up by dendritic cells and alveolar macrophages, and carried from that local tissue in the lungs to the local lymph nodes. From there it continues to spread, eventually getting into the blood and spreading to more lung tissue, as well as other organs such as the intestines and the brain. Functional impairment of the infected dendritic cells by the measles virus is thought to contribute to measles-induced immunosuppression.
Diagnosis
Typically, clinical diagnosis begins with the onset of fever and malaise about 10 days after exposure to the measles virus, followed by the emergence of cough, coryza, and conjunctivitis that worsen in severity over the four days after they appear. Observation of Koplik's spots is also diagnostic. Other possible conditions that can result in these symptoms include parvovirus, dengue fever, Kawasaki disease, and scarlet fever. Laboratory confirmation is, however, strongly recommended.
Laboratory testing
Laboratory diagnosis of measles can be done with confirmation of positive measles IgM antibodies or detection of measles virus RNA from throat, nasal or urine specimens using the reverse transcription polymerase chain reaction assay. This method is particularly useful to confirm cases when the IgM antibody results are inconclusive. For people unable to have their blood drawn, saliva can be collected for salivary measles-specific IgA testing. Salivary tests used to diagnose measles involve collecting a saliva sample and testing for the presence of measles antibodies. This method is not ideal, as saliva contains many other fluids and proteins which may make it difficult to collect samples and detect measles antibodies. Saliva also contains 800 times fewer antibodies than blood samples do, which makes salivary testing additionally difficult. Positive contact with other people known to have measles adds evidence to the diagnosis.
Prevention
Mothers who are immune to measles pass antibodies to their children while they are still in the womb, especially if the mother acquired immunity through infection rather than vaccination. Such antibodies will usually give newborn infants some immunity against measles, but these antibodies are gradually lost over the course of the first nine months of life. Infants under one year of age whose maternal anti-measles antibodies have disappeared become susceptible to infection with the measles virus.
In developed countries, it is recommended that children be immunized against measles at 12 months, generally as part of a three-part MMR vaccine (measles, mumps, and rubella). The vaccine is generally not given before this age because such infants respond inadequately to the vaccine due to an immature immune system. A second dose of the vaccine is usually given to children between the ages of four and five, to increase rates of immunity. Measles vaccines have been given to over a billion people. Vaccination rates have been high enough to make measles relatively uncommon. Adverse reactions to vaccination are rare, with fever and pain at the injection site being the most common. Life-threatening adverse reactions occur in less than one per million vaccinations (<0.0001%).
In developing countries where measles is common, the World Health Organization (WHO) recommends two doses of vaccine be given, at six and nine months of age. The vaccine should be given whether the child is HIV-infected or not. The vaccine is less effective in HIV-infected infants than in the general population, but early treatment with antiretroviral drugs can increase its effectiveness. Measles vaccination programs are often used to deliver other child health interventions as well, such as bed nets to protect against malaria, antiparasite medicine and vitamin A supplements, and so contribute to the reduction of child deaths from other causes.
The Advisory Committee on Immunization Practices (ACIP) recommends that all adult international travelers who do not have positive evidence of previous measles immunity receive two doses of MMR vaccine before traveling, although birth before 1957 is presumptive evidence of immunity. Those born in the United States before 1957 are likely to have been naturally infected with measles virus and generally need not be considered susceptible.
There have been false claims of an association between the measles vaccine and autism; this incorrect concern has reduced the rate of vaccination and increased the number of cases of measles where immunization rates became too low to maintain herd immunity. Additionally, there have been false claims that measles infection protects against cancer.
Administration of the MMR vaccine may prevent measles after exposure to the virus (post-exposure prophylaxis). Post-exposure prophylaxis guidelines are specific to jurisdiction and population. Passive immunization against measles by an intramuscular injection of antibodies could be effective up to the seventh day after exposure. Compared to no treatment, the risk of measles infection is reduced by 83%, and the risk of death by measles is reduced by 76%. However, the effectiveness of passive immunization in comparison to active measles vaccine is not clear.
The MMR vaccine is 95% effective for preventing measles after one dose if the vaccine is given to a child who is 12 months or older; if a second dose of the MMR vaccine is given, it will provide immunity in 99% of children.
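As an illustrative calculation using the figures above (the school size is a hypothetical round number, not a figure from the sources): in a school of 1,000 children in which 95% have received two doses and the remaining 5% are unvaccinated, the expected number of susceptible children is

$$\underbrace{1000 \times 0.95 \times (1 - 0.99)}_{\text{vaccinated but unprotected}} \;+\; \underbrace{1000 \times 0.05}_{\text{unvaccinated}} \;\approx\; 9.5 + 50 \;\approx\; 60,$$

so even near-complete coverage leaves dozens of children for a virus this contagious to reach, which is why small drops in coverage matter.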
There is no evidence that the measles vaccine virus can be transmitted to other persons.
Treatment
There is no specific antiviral treatment if measles develops. Instead, medications are generally aimed at treating superinfections, maintaining good hydration with adequate fluids, and pain relief. Some groups, such as young children and the severely malnourished, are also given vitamin A, which acts as an immunomodulator that boosts the antibody responses to measles and decreases the risk of serious complications.
Medications
Treatment is supportive, with ibuprofen or paracetamol (acetaminophen) to reduce fever and pain and, if required, a fast-acting medication to dilate the airways for cough. Aspirin is generally avoided, as some research has suggested a correlation between children who take aspirin and the development of Reye syndrome.
The use of vitamin A during treatment is recommended to decrease the risk of blindness; however, it does not prevent or cure the disease. A systematic review of trials into its use found no reduction in overall mortality, but two doses (200,000 IU) of vitamin A were shown to reduce mortality for measles in children younger than two years of age. It is unclear if zinc supplementation in children with measles affects outcomes, as it has not been sufficiently studied. There are no adequate studies on whether Chinese medicinal herbs are effective.
Prognosis
Most people survive measles, though in some cases, complications may occur. About 1 in 4 individuals will be hospitalized and 1–2 in 1,000 will die. Complications are more likely in children under age 5 and adults over age 20. Pneumonia is the most common fatal complication of measles infection and accounts for 56–86% of measles-related deaths.
Possible consequences of measles virus infection include laryngotracheobronchitis, sensorineural hearing loss, and—in about 1 in 10,000 to 1 in 300,000 cases—panencephalitis, which is usually fatal. Acute measles encephalitis is another serious risk of measles virus infection. It typically occurs two days to one week after the measles rash breaks out and begins with very high fever, severe headache, convulsions and altered mentation. A person with measles encephalitis may become comatose, and death or brain injury may occur.
For people having had measles, it is rare to ever have a symptomatic reinfection.
The measles virus can deplete previously acquired immune memory by killing cells that make antibodies, and thus weakens the immune system, which can cause deaths from other diseases. Suppression of the immune system by measles lasts about two years and has been epidemiologically implicated in up to 90% of childhood deaths in third world countries, and historically may have caused rather more deaths in the United States, the UK and Denmark than were directly caused by measles. Although the measles vaccine contains an attenuated strain, it does not deplete immune memory.
Epidemiology
Measles is extremely infectious and its continued circulation in a community depends on the generation of susceptible hosts by birth of children. In communities that generate insufficient new hosts the disease will die out. This concept was first recognized in measles by Bartlett in 1957, who referred to the minimum number supporting measles as the critical community size (CCS). Analysis of outbreaks in island communities suggested that the CCS for measles is around 250,000. Due to the ease with which measles is transmitted from person to person in a community, more than 95% of the community must be vaccinated in order to achieve herd immunity.
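A back-of-the-envelope sketch (standard epidemiological reasoning, applied here to the reproduction number range of 12 to 18 cited earlier; the calculation itself is not from this article's sources) shows where the 95% target comes from. The herd immunity threshold $p_c$ depends on the basic reproduction number $R_0$:

$$p_c = 1 - \frac{1}{R_0}, \qquad R_0 = 12 \Rightarrow p_c \approx 91.7\%, \qquad R_0 = 18 \Rightarrow p_c \approx 94.4\%.$$

Dividing by the two-dose vaccine effectiveness of 99% quoted above converts the immunity threshold into the coverage actually required, e.g. $0.944 / 0.99 \approx 95.4\%$, consistent with the stated figure of more than 95%.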
In 2011, the WHO estimated that 158,000 deaths were caused by measles, down from 630,000 deaths in 1990. As of 2018, measles remains a leading cause of vaccine-preventable deaths in the world. In developed countries the mortality rate is lower: in England and Wales from 2007 to 2017, death occurred in two to three cases out of every 10,000. In the United States, one to three children out of every 1,000 infected die (0.1–0.2%). In populations with high levels of malnutrition and a lack of adequate healthcare, mortality can be as high as 10%; in cases with complications, the rate may rise to 20–30%. In 2012, the number of deaths due to measles was 78% lower than in 2000 due to increased rates of immunization among UN member states.
Even in countries where vaccination has been introduced, rates may remain high. Measles is a leading cause of vaccine-preventable childhood mortality. Worldwide, the fatality rate has been significantly reduced by a vaccination campaign led by partners in the Measles Initiative: the American Red Cross, the United States CDC, the United Nations Foundation, UNICEF and the WHO. Globally, measles deaths fell 60% from an estimated 873,000 in 1999 to 345,000 in 2005. Estimates for 2008 indicate deaths fell further to 164,000 globally, with 77% of the remaining measles deaths in 2008 occurring within the Southeast Asian region. There were 142,300 measles-related deaths globally in 2018, most of them reported from the African and eastern Mediterranean regions. These estimates were slightly higher than those of 2017, when 124,000 deaths were reported due to measles infection globally.
In 2000, the WHO established the Global Measles and Rubella Laboratory Network (GMRLN) to provide laboratory surveillance for measles, rubella, and congenital rubella syndrome. Data from 2016 to 2018 show that the most frequently detected measles virus genotypes are decreasing, suggesting that increasing global population immunity has decreased the number of chains of transmission.
Cases reported in the first three months of 2019 were 300% higher than in the first three months of 2018, with outbreaks in every region of the world, even in countries with high overall vaccination coverage, where it spread among clusters of unvaccinated people. The number of reported cases as of mid-November 2019 was over 413,000 globally, with an additional 250,000 cases in the DRC (as reported through their national system), continuing the increasing trend seen in the earlier months of 2019 compared to 2018. In 2019, the total number of cases worldwide climbed to 869,770. The number of cases reported for 2020 was lower compared to 2019. According to the WHO, the COVID-19 pandemic hindered vaccination campaigns in at least 68 countries, including countries that were experiencing outbreaks, which increased the risk of additional cases.
In 2022, there were an estimated 136,000 measles deaths globally, mostly among unvaccinated or under-vaccinated children under the age of 5 years.
In February 2024, the World Health Organization said more than half of the world was at risk of a measles outbreak due to COVID-19 pandemic-related disruptions to vaccination. All world regions except the Americas had reported such outbreaks, though the Americas could still be expected to become a hotspot in the future. Death rates during the outbreaks tend to be higher in poorer countries, but middle-income nations are also heavily affected, according to the WHO.
In November 2024, the WHO and CDC reported that measles cases had increased by 20% in the previous year, primarily due to insufficient vaccine coverage in the world's poorest and conflict-affected regions. Nearly half of the major outbreaks occurred in Africa, where deaths rose by 37%, the agencies noted. In 2023, approximately 10.3 million cases of the highly contagious disease were reported, up from 8.65 million the previous year.
Europe
In England and Wales, though deaths from measles were uncommon, they averaged about 500 per year in the 1940s. Deaths diminished with the improvement of medical care in the 1950s, but the incidence of the disease did not retreat until vaccination was introduced in the late 1960s. Wider coverage was achieved in the 1980s with the measles, mumps and rubella (MMR) vaccine.
In 2013–14, there were almost 10,000 cases in 30 European countries. Most cases occurred in unvaccinated individuals, and over 90% of cases occurred in Germany, Italy, the Netherlands, Romania, and the United Kingdom. Between October 2014 and March 2015, a measles outbreak in the German capital of Berlin resulted in at least 782 cases. In 2017, numbers continued to increase in Europe to 21,315 cases, with 35 deaths. In preliminary figures for 2018, reported cases in the region increased 3-fold to 82,596 in 47 countries, with 72 deaths; Ukraine had the most cases (53,218), with the highest incidence rates being in Ukraine (1,209 cases per million), Serbia (579), Georgia (564) and Albania (500). The previous year (2017) saw an estimated measles vaccine coverage of 95% for the first dose and 90% for the second dose in the region, the latter figure being the highest-ever estimated second-dose coverage.
In 2019, the United Kingdom, Albania, the Czech Republic, and Greece lost their measles-free status due to ongoing and prolonged spread of the disease in these countries. In the first 6 months of 2019, 90,000 cases occurred in Europe.
Americas
As a result of widespread vaccination, the disease was declared eliminated from the Americas in 2016. However, there were cases again in 2017, 2018, 2019, and 2020 in this region.
United States
In the United States, measles affected approximately 3,000 people per million in the 1960s before the vaccine was available. With consistent widespread childhood vaccination, this figure fell to 13 cases per million by the 1980s, and to about 1 case per million by 2000.
In 1991, an outbreak of measles in Philadelphia was centered at the Faith Tabernacle Congregation, a faith healing church that actively discouraged parishioners from vaccinating their children. Over 1,400 people were infected with measles and nine children died.
Before immunization in the United States, between three and four million cases occurred each year. The United States was declared free of circulating measles in 2000, with 911 cases from 2001 to 2011. In 2014 the CDC said endemic measles, rubella, and congenital rubella syndrome had not returned to the United States. Occasional measles outbreaks persist, however, because of cases imported from abroad, of which more than half are the result of unvaccinated U.S. residents who are infected abroad and infect others upon return to the United States. The CDC continues to recommend measles vaccination throughout the population to prevent outbreaks like these.
In 2014, an outbreak was initiated in Ohio when two unvaccinated Amish men harboring asymptomatic measles returned to the United States from missionary work in the Philippines. Their return to a community with low vaccination rates led to an outbreak that rose to include a total of 383 cases across nine counties. Of the 383 cases, 340 (89%) occurred in unvaccinated individuals.
From 4 January to 2 April 2015, there were 159 cases of measles reported to the CDC. Of those 159 cases, 111 (70%) were determined to have come from an earlier exposure in late December 2014. This outbreak was believed to have originated from the Disneyland theme park in California. The Disneyland outbreak was held responsible for the infection of 147 people in seven U.S. states as well as Mexico and Canada, the majority of whom were either unvaccinated or had unknown vaccination status. Of the cases, 48% were unvaccinated and 38% were unsure of their vaccination status. The initial exposure to the virus was never identified.
In 2015, a woman in Washington state died of pneumonia as a result of measles, the first measles fatality in the U.S. since 2003. She had been vaccinated against measles but was taking immunosuppressive drugs for another condition; the drugs suppressed her immunity and she became infected. She did not develop a rash, but contracted pneumonia, which caused her death.
In June 2017, the Maine Health and Environmental Testing Laboratory confirmed a case of measles in Franklin County, the first case of measles in 20 years for the state of Maine. In 2018, one case occurred in Portland, Oregon, with 500 people exposed; 40 of them lacked immunity to the virus and were being monitored by county health officials as of 2 July 2018. There were 273 cases of measles reported throughout the United States in 2018, including an outbreak in Brooklyn with more than 200 reported cases from October 2018 to February 2019. That outbreak was tied to the population density of the Orthodox Jewish community, with the initial exposure coming from an unvaccinated child who caught measles while visiting Israel.
A resurgence of measles occurred during 2019, generally tied to parents choosing not to have their children vaccinated, as most of the reported cases occurred in people 19 years old or younger. Cases were first reported in Washington state in January, with an outbreak of at least 58 confirmed cases, most within Clark County, which has a higher rate of vaccination exemptions compared to the rest of the state; nearly one in four kindergartners in Clark County had not received vaccinations, according to state data. This led Washington state governor Jay Inslee to declare a state of emergency, and the state legislature to introduce legislation disallowing vaccination exemptions for personal or philosophical reasons. In April 2019, New York Mayor Bill de Blasio declared a public health emergency because of "a huge spike" in cases of measles: there were 285 cases centered on the Orthodox Jewish areas of Brooklyn in 2018, compared with only two cases in 2017, and 168 more in neighboring Rockland County. Other outbreaks included Santa Cruz County and Butte County in California, and the states of New Jersey and Michigan. By that point, 695 cases of measles had been reported in 22 states, the highest number since measles was declared eliminated in 2000. From 1 January to 31 December 2019, 1,282 individual cases of measles were confirmed in 31 states, the greatest number reported in the U.S. since 1992. Of the 1,282 cases, 128 people were hospitalized, and 61 reported complications, including pneumonia and encephalitis.
Following the end of the 2019 outbreak, reported cases have fallen to pre-outbreak levels: 13 cases in 2020, 49 cases in 2021, and 121 cases in 2022.
Brazil
The spread of measles had been interrupted in Brazil in 2016, with the last-known case twelve months earlier. This last case was in the state of Ceará.
Brazil was awarded a measles elimination certificate by the Pan American Health Organization in 2016, but the Ministry of Health has acknowledged that the country has struggled to keep this certificate, since two outbreaks were identified in 2018, one in the state of Amazonas and another in Roraima, in addition to cases in other states (Rio de Janeiro, Rio Grande do Sul, Pará, São Paulo and Rondônia), totaling 1,053 confirmed cases as of 1 August 2018. In these outbreaks, and in most other cases, the contagion was related to the importation of the virus, especially from Venezuela. This was confirmed by the genotype of the virus (D8) that was identified, which is the same one that circulates in Venezuela.
Southeast Asia
In the Vietnamese measles epidemic in the spring of 2014, an estimated 8,500 measles cases were reported as of 19 April, with 114 fatalities; as of 30 May, 21,639 suspected measles cases had been reported, with 142 measles-related fatalities. In the Naga Self-Administered Zone in a remote northern region of Myanmar, at least 40 children died during a measles outbreak in August 2016 that was probably caused by lack of vaccination in an area of poor health infrastructure. In the 2019 Philippines measles outbreak, 23,563 measles cases were reported in the country, with 338 fatalities. A measles outbreak also occurred among the Malaysian Orang Asli sub-group of Batek people in the state of Kelantan from May 2019, causing the deaths of 15 members of the tribe. In 2024, a measles outbreak was declared in the Bangsamoro region in the Philippines, with at least 592 cases and 3 deaths.
South Pacific
A measles outbreak in New Zealand resulted in 2,193 confirmed cases and two deaths, while an outbreak in Tonga produced 612 cases of measles.
Samoa
A measles outbreak in Samoa in late 2019 produced over 5,700 cases of measles and 83 deaths, out of a Samoan population of 200,000. About three percent of the population were infected, and a state of emergency was declared from 17 November to 7 December. A vaccination campaign raised the measles vaccination rate from 31–34% in 2018 to an estimated 94% of the eligible population in December 2019.
Africa
The Democratic Republic of the Congo and Madagascar have reported the highest numbers of cases in 2019. However, cases have decreased in Madagascar as a result of nationwide emergency measles vaccine campaigns. As of August 2019 outbreaks were occurring in Angola, Cameroon, Chad, Nigeria, South Sudan and Sudan.
Madagascar
An outbreak of measles in 2018 resulted in more than 115,000 cases and over 1,200 deaths.
Democratic Republic of Congo
An outbreak of measles with nearly 5,000 deaths and 250,000 infections occurred in 2019, after the disease spread to all the provinces in the country. Most deaths were among children under five years of age. The World Health Organization (WHO) reported this as the world's largest and fastest-moving epidemic.
History
Measles is of zoonotic origin, having evolved from rinderpest, which infects cattle. A precursor of the measles virus began causing infections in humans as early as the 4th century BC or as late as after 500 AD. The Antonine Plague of 165–180 AD has been speculated to have been measles, but the actual cause of this plague is unknown and smallpox is a more likely cause. The first systematic description of measles, and its distinction from smallpox and chickenpox, is credited to the Persian physician Muhammad ibn Zakariya al-Razi (860–932), who published The Book of Smallpox and Measles. At the time of Razi's book, it is believed that outbreaks were still limited and that the virus was not fully adapted to humans. Sometime between 1100 and 1200 AD, the measles virus fully diverged from rinderpest, becoming a distinct virus that infects humans. This agrees with the observation that measles requires a susceptible population of over 500,000 to sustain an epidemic, a situation that occurred in historic times following the growth of medieval European cities.
Measles is an endemic disease, meaning it has been continually present in a community and many people develop resistance. In populations not exposed to measles, exposure to the new disease can be devastating. In 1529, a measles outbreak in Cuba killed two-thirds of those indigenous people who had previously survived smallpox. Two years later, measles was responsible for the deaths of half the population of Honduras, and it has ravaged Mexico, Central America, and the Inca civilization.
Between roughly 1855 and 2005, measles is estimated to have killed about 200 million people worldwide.
The 1846 measles outbreak in the Faroe Islands was unusual for being well studied. Measles had not been seen on the islands for 60 years, so almost no residents had any acquired immunity. Three-quarters of the residents got sick, and more than 100 (1–2%) died from it before the epidemic burned itself out. Peter Ludvig Panum observed the outbreak and determined that measles was spread through direct contact of contagious people with people who had never had measles.
Measles killed 20 percent of Hawaii's population in the 1850s. In 1875, measles killed over 40,000 Fijians, approximately one-third of the population. In the 19th century, the disease killed more than half of the Great Andamanese population. Seven to eight million children are thought to have died from measles each year before the vaccine was introduced.
In 1914, a statistician for the Prudential Insurance Company estimated from a survey of 22 countries that 1% of all deaths in the temperate zone were caused by measles. He observed also that 1–6% of cases of measles ended fatally, the difference depending on age (0–3 being the worst), social conditions (e.g. overcrowded tenements) and pre-existing health conditions.
In 1954, the virus causing the disease was isolated from a 13-year-old boy from the United States, David Edmonston, and adapted and propagated on chick embryo tissue culture. The World Health Organization recognizes eight clades, named A, B, C, D, E, F, G, and H. Twenty-three strains of the measles virus have been identified and designated within these clades. While at Merck, Maurice Hilleman developed the first successful vaccine. Licensed vaccines to prevent the disease became available in 1963. An improved measles vaccine became available in 1968. Measles as an endemic disease was eliminated from the United States in 2000, but continues to be reintroduced by international travelers. In 2019 there were at least 1,241 cases of measles in the United States distributed across 31 states, with over three quarters in New York.
Society and culture
German anti-vaccination campaigner and HIV/AIDS denialist Stefan Lanka posed a challenge on his website in 2011, offering a sum of €100,000 for anyone who could scientifically prove that measles is caused by a virus and determine the diameter of the virus. He posited that the illness is psychosomatic and that the measles virus does not exist. When provided with overwhelming scientific evidence from various medical studies by German physician David Bardens, Lanka did not accept the findings, prompting Bardens to take the matter to court. The initial legal case ended with the ruling that Lanka was to pay the prize. However, on appeal, Lanka was ultimately not required to pay the award because the submitted evidence did not meet his exact requirements. The case received wide international coverage that prompted many to comment on it, including neurologist, well-known skeptic and science-based medicine advocate Steven Novella, who called Lanka "a crank".
As outbreaks easily occur in under-vaccinated populations, the disease is seen as a test of sufficient vaccination within a population. Measles outbreaks have been on the rise in the United States, especially in communities with lower rates of vaccination. Uneven vaccine distribution within a single territory, by age or social class, may shape differing general perceptions of vaccination efficacy. Measles is often introduced into a region by travelers from other countries and typically spreads to those who have not received the measles vaccination.
Alternative names
Other names include morbilli, rubeola, red measles, and English measles.
Research
In May 2015, the journal Science published a report in which researchers found that the measles infection can leave a population at increased risk for mortality from other diseases for two to three years. Results from additional studies that show the measles virus can kill cells that make antibodies were published in November 2019.
A specific drug treatment for measles, ERDRP-0519, has shown promising results in animal studies, but has not yet been tested in humans.
| Biology and health sciences | Infectious disease | null |
58921 | https://en.wikipedia.org/wiki/Natural%20disaster | Natural disaster | A natural disaster is the very harmful impact on a society or community after a natural hazard event. Some examples of natural hazard events include avalanches, droughts, earthquakes, floods, heat waves, landslides, tropical cyclones, volcanic activity and wildfires. Additional natural hazards include blizzards, dust storms, firestorms, hail, ice storms, sinkholes, thunderstorms, tornadoes and tsunamis. A natural disaster can cause loss of life or damage property. It typically causes economic damage. How bad the damage is depends on how well people are prepared for disasters and how strong the buildings, roads, and other structures are. Scholars have argued that the term natural disaster is unsuitable and should be abandoned; instead, the simpler term disaster could be used, with the type of hazard specified. A disaster happens when a natural or human-made hazard impacts a vulnerable community. It results from the combination of the hazard and the exposure of a vulnerable society.
Nowadays it is hard to distinguish between natural and human-made disasters. The term natural disaster was challenged as early as 1976. Human choices in architecture, fire risk, and resource management can cause or worsen natural disasters. Climate change also affects how often disasters due to extreme weather hazards happen. These "climate hazards" are floods, heat waves, wildfires, tropical cyclones, and the like.
Some things can make natural disasters worse. Examples are inadequate building norms, marginalization of people and poor choices on land use planning. Many developing countries do not have proper disaster risk reduction systems. This makes them more vulnerable to natural disasters than high income countries. An adverse event only becomes a disaster if it occurs in an area with a vulnerable population.
Terminology
A natural disaster is the highly harmful impact on a society or community following a natural hazard event. The term "disaster" itself is defined as follows: "Disasters are serious disruptions to the functioning of a community that exceed its capacity to cope using its own resources. Disasters can be caused by natural, man-made and technological hazards, as well as various factors that influence the exposure and vulnerability of a community."
The US Federal Emergency Management Agency (FEMA) explains the relationship between natural disasters and natural hazards as follows: "Natural hazards and natural disasters are related but are not the same. A natural hazard is the threat of an event that will likely have a negative impact. A natural disaster is the negative impact following an actual occurrence of a natural hazard in the event that it significantly harms a community. An example of the distinction between a natural hazard and a disaster is that an earthquake is the hazard which caused the 1906 San Francisco earthquake disaster."
A natural hazard is a natural phenomenon that might have a negative effect on humans and other animals, or the environment. Natural hazard events can be classified into two broad categories: geophysical and biological. Natural hazards can be provoked or affected by anthropogenic processes, e.g. land-use change, drainage and construction.
There are 18 natural hazards included in the National Risk Index of FEMA: avalanche, coastal flooding, cold wave, drought, earthquake, hail, heat wave, tropical cyclone, ice storm, landslide, lightning, riverine flooding, strong wind, tornado, tsunami, volcanic activity, wildfire, and winter weather. Dust storms are an additional hazard not included in the index.
Critique
The term natural disaster was called a misnomer as early as 1976. A disaster is the result of a natural hazard impacting a vulnerable community, but disasters can be avoided. Earthquakes, droughts, floods, storms, and other events lead to disasters because of human action and inaction. Poor land and policy planning and deregulation can create worse conditions; they often involve development activities that ignore or fail to reduce the disaster risks. Nature alone is blamed for disasters even when disasters result from failures in development. Disasters also result from the failure of societies to prepare. Examples of such failures include inadequate building norms, marginalization of people, inequities, overexploitation of resources, extreme urban sprawl and climate change.
Defining disasters as solely natural events has serious implications when it comes to understanding the causes of a disaster and the distribution of political and financial responsibility in disaster risk reduction, disaster management, compensation, insurance and disaster prevention. Using natural to describe disasters misleads people to think the devastating results are inevitable, out of our control, and are simply part of a natural process. Hazards (earthquakes, hurricanes, pandemics, drought etc.) are inevitable, but the impact they have on society is not.
Thus, the term natural disaster is unsuitable and should be abandoned in favor of the simpler term disaster, while also specifying the category (or type) of hazard.
Scale
By region and country
As of 2019, the countries with the highest share of disability-adjusted life years (DALY) lost due to natural disasters are Bahamas, Haiti, Zimbabwe and Armenia (probably mainly due to the Spitak Earthquake). The Asia-Pacific region is the world's most disaster prone region. A person in Asia-Pacific is five times more likely to be hit by a natural disaster than someone living in other regions.
Between 1995 and 2015, the greatest number of natural disasters occurred in the United States, China and India. In 2012, there were 905 natural disasters worldwide, 93% of which were weather-related disasters. Overall costs were US$170 billion, of which insured losses were $70 billion; 2012 was a moderate year. 45% were meteorological (storms), 36% were hydrological (floods), 12% were climatological (heat waves, cold waves, droughts, wildfires) and 7% were geophysical events (earthquakes and volcanic eruptions). Between 1980 and 2011, geophysical events accounted for 14% of all natural catastrophes.
Developing countries often have ineffective communication systems as well as insufficient support for disaster risk reduction and emergency management. This makes them more vulnerable to natural disasters than high income countries.
Slow and rapid onset events
Natural hazards occur across different time scales as well as area scales. Tornadoes and flash floods are rapid onset events, meaning they occur with a short warning time and are short-lived. Slow onset events can also be very damaging; for example, drought is a natural hazard that develops slowly, sometimes over years.
Impacts
A natural disaster may cause loss of life, injury or other health impacts, property damage, loss of livelihoods and services, social and economic disruption, or environmental damage.
On death rates
Globally, the total number of deaths from natural disasters has been reduced by 75% over the last 100 years, due to the increased development of countries, increased preparedness, better education, better methods, and aid from international organizations. Since the global population has grown over the same time period, the decrease in deaths per capita is even larger, dropping to about 6% of the original rate.
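The per-capita figure can be checked with a rough calculation (assuming, purely as an illustration, roughly fourfold global population growth over the past 100 years; that growth factor is not given in the text): if absolute deaths fell to 25% of their original number while the population quadrupled, then

$$\frac{\text{deaths per capita now}}{\text{deaths per capita then}} \approx \frac{0.25}{4} \approx 0.06,$$

i.e. about 6% of the original rate, matching the figure above.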
The death rate from natural disasters is highest in developing countries due to the lower quality of building construction, infrastructure, and medical facilities.
On the economy
Global economic losses due to extreme weather, climate and water events are increasing. Costs increased sevenfold from the 1970s to the 2010s. Direct losses from disasters averaged above US$330 billion annually between 2015 and 2021. Socio-economic factors such as population growth and increased wealth have contributed to this trend of increasing losses, showing that increased exposure is the most important driver of economic losses. However, part of these losses is also attributable to human-induced climate change.
On the environment
During emergencies such as natural disasters and armed conflicts more waste may be produced, while waste management is given low priority compared with other services. Existing waste management services and infrastructures can be disrupted, leaving communities with unmanaged waste and increased littering. Under these circumstances human health and the environment are often negatively impacted.
Natural disasters (e.g. earthquakes, tsunamis, hurricanes) have the potential to generate a significant amount of waste within a short period. Waste management systems can be out of action or curtailed, often requiring considerable time and funding to restore. For example, the tsunami in Japan in 2011 produced huge amounts of debris: estimates of 5 million tonnes of waste were reported by the Japanese Ministry of the Environment. Some of this waste, mostly plastic and styrofoam, washed up on the coasts of Canada and the United States in late 2011. Along the west coast of the United States, this increased the amount of litter by a factor of 10 and may have transported alien species. Storms are also important generators of plastic litter. A study by Lo et al. (2020) reported a 100% increase in the amount of microplastics on beaches surveyed following a typhoon in Hong Kong in 2018.
A significant amount of plastic waste can be produced during disaster relief operations. Following the 2010 earthquake in Haiti, the generation of waste from relief operations was referred to as a "second disaster". The United States military reported that millions of water bottles and styrofoam food packages were distributed although there was no operational waste management system. Over 700,000 plastic tarpaulins and 100,000 tents were required for emergency shelters. The increase in plastic waste, combined with poor disposal practices, resulted in open drainage channels being blocked, increasing the risk of disease.
Conflicts can result in large-scale displacement of communities. People living under these conditions are often provided with minimal waste management facilities. Burn pits are widely used to dispose of mixed wastes, including plastics. Air pollution can lead to respiratory and other illnesses. For example, Sahrawi refugees have been living in five camps near Tindouf, Algeria for nearly 45 years. As waste collection services are underfunded and there is no recycling facility, plastics have flooded the camps' streets and surroundings. In contrast, the Azraq camp in Jordan for refugees from Syria has waste management services; of 20.7 tonnes of waste produced per day, 15% is recyclable.
On women and vulnerable populations
Because of the social, political and cultural context of many places throughout the world, women are often disproportionately affected by disaster. In the 2004 Indian Ocean tsunami, more women died than men, partly due to the fact that fewer women knew how to swim. During and after a natural disaster, women are at increased risk of being affected by gender based violence and are increasingly vulnerable to sexual violence. Disrupted police enforcement, lax regulations, and displacement all contribute to increased risk of gender based violence and sexual assault.
In addition to LGBT people and immigrants, women are also disproportionately victimized by religion-based scapegoating for natural disasters: fanatical religious leaders or adherents may claim that a god or gods are angry with women's independent, freethinking behavior, such as dressing 'immodestly', having sex or abortions. For example, Hindutva party Hindu Makkal Katchi and others blamed women's struggle for the right to enter the Sabarimala temple for the August 2018 Kerala floods, purportedly inflicted by the angry god Ayyappan.
During and after natural disasters, routine health behaviors become interrupted. In addition, health care systems may have broken down as a result of the disaster, further reducing access to contraceptives. Unprotected intercourse during this time can lead to increased rates of childbirth, unintended pregnancies and sexually transmitted infections (STIs).
Pregnant women are one of the groups disproportionately affected by natural disasters. Inadequate nutrition, little access to clean water, lack of health-care services and psychological stress in the aftermath of the disaster can lead to a significant increase in maternal morbidity and mortality. Furthermore, shortage of healthcare resources during this time can convert even routine obstetric complications into emergencies.
Once a vulnerable population has experienced a disaster, the community can take many years to repair and that repair period can lead to further vulnerability. The disastrous consequences of natural disaster also affect the mental health of affected communities, often leading to post-traumatic symptoms. These increased emotional experiences can be supported through collective processing, leading to resilience and increased community engagement.
On governments and voting processes
Disasters stress government capacity, as the government tries to conduct routine as well as emergency operations. Some theorists of voting behavior propose that citizens update information about government effectiveness based on its response to disasters, which affects their vote choice in the next election. Indeed, some evidence, based on data from the United States, reveals that incumbent parties can lose votes if citizens perceive them as responsible for a poor disaster response, or gain votes based on perceptions of well-executed relief work. The latter study also finds, however, that voters do not reward incumbent parties for disaster preparedness, which may end up affecting government incentives to invest in such preparedness.
Disasters caused by geological hazards
Landslides
Avalanches
Earthquakes
An earthquake is the result of a sudden release of energy in the Earth's crust that creates seismic waves. At the Earth's surface, earthquakes manifest themselves by vibration, shaking, and sometimes displacement of the ground. Earthquakes are caused by slippage within geological faults. The underground point of origin of the earthquake is called the seismic focus. The point directly above the focus on the surface is called the epicenter. Earthquakes by themselves rarely kill people or wildlife – it is usually the secondary events that they trigger, such as building collapse, fires, tsunamis and volcanic eruptions, that cause death. Many of these can possibly be avoided by better construction, safety systems, early warning and planning.
Sinkholes
A sinkhole is a depression or hole in the ground caused by some form of collapse of the surface layer. When natural erosion, human mining or underground excavation makes the ground too weak to support the structures built on it, the ground can collapse and produce a sinkhole.
Coastal erosion
Coastal erosion is a physical process by which shorelines in coastal areas around the world shift and change, primarily in response to waves and currents that can be influenced by tides and storm surge. Coastal erosion can result from long-term processes (see also beach evolution) as well as from episodic events such as tropical cyclones or other severe storm events. Coastal erosion is one of the most significant coastal hazards. It forms a threat to infrastructure, capital assets and property.
Volcanic eruptions
Volcanoes can cause widespread destruction and consequent disaster in several ways. One hazard is the volcanic eruption itself, with the force of the explosion and falling rocks able to cause harm. Lava may also be released during the eruption of a volcano; as it leaves the volcano, it can destroy buildings, plants and animals due to its extreme heat. In addition, volcanic ash may form a cloud (generally after cooling) and settle thickly in nearby locations. When mixed with water, this forms a concrete-like material. In sufficient quantities, ash may cause roofs to collapse under its weight. Even small quantities will harm humans if inhaled – it has the consistency of ground glass and therefore causes laceration to the throat and lungs. Volcanic ash can also cause abrasion damage to moving machinery such as engines. The main killer of humans in the immediate surroundings of a volcanic eruption is pyroclastic flows, consisting of a cloud of hot ash which builds up in the air above the volcano and rushes down the slopes when the eruption no longer supports the lifting of the gases. It is believed that Pompeii was destroyed by a pyroclastic flow. A lahar is a volcanic mudflow or landslide. The 1953 Tangiwai disaster was caused by a lahar, as was the 1985 Armero tragedy in which the town of Armero was buried and an estimated 23,000 people were killed.
Volcanoes rated at 8 (the highest level) on the volcanic explosivity index are known as supervolcanoes. According to the Toba catastrophe theory, 75,000 to 80,000 years ago, a supervolcanic eruption at what is now Lake Toba in Sumatra reduced the human population to 10,000 or even 1,000 breeding pairs, creating a bottleneck in human evolution, and killed three-quarters of all plant life in the northern hemisphere. However, there is considerable debate regarding the veracity of this theory. The main danger from a supervolcano is the immense cloud of ash, which has a disastrous global effect on climate and temperature for many years.
Tsunami
A tsunami (plural: tsunamis or tsunami; from Japanese: 津波, lit. "harbour wave"; English pronunciation: /tsuːˈnɑːmi/), also known as a seismic sea wave or tidal wave, is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Tsunamis can be caused by undersea earthquakes such as the 2004 Boxing Day tsunami, or by landslides such as the one in 1958 at Lituya Bay, Alaska, or by volcanic eruptions such as the ancient eruption of Santorini. On March 11, 2011, a tsunami occurred near Fukushima, Japan and spread through the Pacific Ocean.
Disasters caused by extreme weather hazards
Some of the 18 natural hazards included in the National Risk Index of FEMA now have a higher probability of occurring, and at higher intensity, due to the effects of climate change. This applies to heat waves, droughts, wildfire and coastal flooding.
Hot and dry conditions
Heat waves
A heat wave is a period of unusually and excessively hot weather. Heat waves are rare and require specific combinations of weather events to take place, and may include temperature inversions, katabatic winds, or other phenomena. The worst heat wave in recent history was the European Heat Wave of 2003. The 2010 Northern Hemisphere summer resulted in severe heat waves which killed over 2,000 people. The heat caused hundreds of wildfires which led to widespread air pollution and burned thousands of square kilometers of forest.
Droughts
Well-known historical droughts include the 1997–2009 Millennium Drought in Australia, which led to a water supply crisis across much of the country and prompted the construction of many desalination plants for the first time. In 2011, the State of Texas was under a drought emergency declaration for the entire calendar year and suffered severe economic losses. The drought caused the Bastrop fires.
Duststorms
Firestorms
Wildfires
Wildfires are large fires which often start in wildland areas. Common causes include lightning and drought, but wildfires may also be started by human negligence or arson. They can spread to populated areas and thus be a threat to humans and property, as well as wildlife. One example of a notable wildfire is the 1871 Peshtigo Fire in the United States, which killed at least 1,700 people. Another is the 2009 Victorian bushfires in Australia (collectively known as the "Black Saturday bushfires"), which were fueled by a summer heat wave in Victoria: Melbourne experienced three days in a row of temperatures exceeding 40 °C (104 °F), with some regional areas sweltering through much higher temperatures.
Storms and heavy rain
Floods
A flood is an overflow of water that 'submerges' land. The EU Floods Directive defines a flood as a temporary covering of land that is usually dry with water. In the sense of 'flowing water', the word may also be applied to the inflow of the tides. Flooding may result from the volume of a body of water, such as a river or lake, becoming higher than usual, causing some of the water to escape its usual boundaries. While the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, a flood is not considered significant unless the water covers land used by humans, such as a village, city or other inhabited area, roads or expanses of farmland.
Thunderstorms
Severe storms, dust clouds and volcanic eruptions can generate lightning. Apart from the damage typically associated with storms, such as winds, hail and flooding, the lightning itself can damage buildings, ignite fires and kill by direct contact. Most deaths from lightning occur in the poorer countries of the Americas and Asia, where lightning is common and adobe mud brick housing provides little protection.
Tropical cyclone
Typhoon, cyclone, cyclonic storm and hurricane are different names for the same phenomenon: a tropical storm that forms over an ocean. It is fueled by water evaporating from the warm ocean surface and is characterized by strong winds, heavy rainfall and thunderstorms. The term used depends on where the storm originates: in the Atlantic and Northeast Pacific, the term "hurricane" is used; in the Northwest Pacific, it is referred to as a "typhoon"; a "cyclone" occurs in the South Pacific and Indian Ocean.
The deadliest tropical cyclone on record was the 1970 Bhola cyclone; the deadliest Atlantic hurricane was the Great Hurricane of 1780, which devastated Martinique, St. Eustatius and Barbados. Another notable hurricane is Hurricane Katrina, which devastated the Gulf Coast of the United States in 2005. Hurricanes may become more intense and produce more heavy rainfall as a consequence of human-induced climate change.
Tornadoes
A tornado is a violent and dangerous rotating column of air that is in contact with both the surface of the Earth and a cumulonimbus cloud, or, in rare cases, the base of a cumulus cloud. It is also referred to as a twister or a cyclone, although the word cyclone is used in meteorology in a wider sense to refer to any closed low pressure circulation. Tornadoes come in many shapes and sizes but typically take the form of a visible condensation funnel, the narrow end of which touches the Earth and is often encircled by a cloud of debris and dust. Tornadoes can occur one at a time, or can occur in large tornado outbreaks associated with supercells or in other large areas of thunderstorm development.
Most tornadoes have wind speeds of less than , are approximately across, and travel a few kilometers before dissipating. The most extreme tornadoes can attain wind speeds of more than , attain a width exceeding across, and stay on the ground for perhaps more than .
Cold-weather events
Blizzards
Blizzards are severe winter storms characterized by heavy snow and strong winds. When high winds stir up snow that has already fallen, it is known as a ground blizzard. Blizzards can impact local economic activities, especially in regions where snowfall is rare. The Great Blizzard of 1888 affected the United States, destroying many tons of wheat crops. In Asia, the 1972 Iran blizzard and the 2008 Afghanistan blizzard were the deadliest blizzards in history; in the former, an area the size of Wisconsin was entirely buried in snow. The 1993 Superstorm originated in the Gulf of Mexico and traveled north, causing damage in 26 American states as well as in Canada and leading to more than 300 deaths.
Hailstorms
Hail is precipitation in the form of ice that does not melt before it hits the ground. Hailstorms are produced by thunderstorms. Hailstones usually measure between in diameter. These can damage the location in which they fall. Hailstorms can be especially devastating to farm fields, ruining crops and damaging equipment. A particularly damaging hailstorm hit Munich, Germany, on July 12, 1984, causing about $2 billion in insurance claims.
Multi-hazard analysis
Each of the natural hazard types outlined above has very different characteristics, in terms of the spatial and temporal scales it influences, hazard frequency and return period, and measures of intensity and impact. These complexities result in "single-hazard" assessments being commonplace, where the hazard potential from one particular hazard type is constrained. In these examples, hazards are often treated as isolated or independent. An alternative is a "multi-hazard" approach which seeks to identify all possible natural hazards and their interactions or interrelationships.
Many examples exist of one natural hazard triggering or increasing the probability of one or more other natural hazards. For example, an earthquake may trigger landslides, whereas a wildfire may increase the probability of landslides being generated in the future. A detailed review of such interactions across 21 natural hazards identified 90 possible interactions, of varying likelihood and spatial importance. There may also be interactions between these natural hazards and anthropogenic processes. For example, groundwater abstraction may trigger groundwater-related subsidence.
Effective hazard analysis in any given area (e.g., for the purposes of disaster risk reduction) should ideally include an examination of all relevant hazards and their interactions. To be of most use for risk reduction, hazard analysis should be extended to risk assessment wherein the vulnerability of the built environment to each of the hazards is taken into account. This step is well developed for seismic risk, where the possible effect of future earthquakes on structures and infrastructure is assessed, as well as for risk from extreme wind and to a lesser extent flood risk. For other types of natural hazard the calculation of risk is more challenging, principally because of the lack of functions linking the intensity of a hazard and the probability of different levels of damage (fragility curves).
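To make the final point concrete, the following is a minimal Python sketch of how a fragility curve is typically applied once one exists. It assumes the lognormal form that is common in seismic risk work; the median capacity and dispersion values are illustrative placeholders, not figures from any published curve.

from math import log, sqrt, erf

def fragility(intensity, median, beta):
    """Lognormal fragility curve: probability of reaching a given
    damage state at a given hazard intensity (e.g., peak ground
    acceleration). `median` and `beta` are illustrative parameters."""
    z = log(intensity / median) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Illustrative: a building class with median collapse capacity 0.4 g
# and dispersion 0.5, subjected to shaking of 0.3 g.
print(f"P(damage) = {fragility(0.3, median=0.4, beta=0.5):.2f}")

Deriving such curves for floods, landslides, or volcanic ash loading is precisely the step described above as underdeveloped for hazards other than earthquakes and extreme wind.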
Responses
Disaster management is a main function of civil protection (or civil defense) authorities. It should address all four of the phases of disasters: mitigation and prevention, disaster response, recovery and preparedness.
Mitigation and prevention
Disaster risk reduction
Response
Recovery
Preparedness
Society and culture
International law
The 1951 Refugee Convention and its 1967 Protocol are the cornerstone documents for refugee protection and population displacement. The 1998 UN Guiding Principles on Internal Displacement and 2009 Kampala Convention protect people displaced due to natural disasters.
| Physical sciences | Natural disasters | null |
58937 | https://en.wikipedia.org/wiki/Diphtheria | Diphtheria | Diphtheria is an infection caused by the bacterium Corynebacterium diphtheriae. Most infections are asymptomatic or have a mild clinical course, but in some outbreaks, the mortality rate approaches 10%. Signs and symptoms may vary from mild to severe, and usually start two to five days after exposure. Symptoms often develop gradually, beginning with a sore throat and fever. In severe cases, a grey or white patch develops in the throat, which can block the airway, and create a barking cough similar to what is observed in croup. The neck may also swell, in part due to the enlargement of the cervical lymph nodes. Diphtheria can also involve the skin, eyes, or genitals, and can cause complications, including myocarditis (which in itself can result in an abnormal heart rate), inflammation of nerves (which can result in paralysis), kidney problems, and bleeding problems due to low levels of platelets.
Diphtheria is usually spread between people by direct contact, through the air, or through contact with contaminated objects. Asymptomatic transmission and chronic infection are also possible. Different strains of C. diphtheriae are the main cause of the variability in lethality, as the lethality and symptoms themselves are caused by the exotoxin produced by the bacteria. Diagnosis can often be made based on the appearance of the throat with confirmation by microbiological culture. Previous infection may not protect against reinfection.
A diphtheria vaccine is effective for prevention, and is available in a number of formulations. Three or four doses, given along with tetanus vaccine and pertussis vaccine, are recommended during childhood. Further doses of the diphtheria–tetanus vaccine are recommended every ten years. Protection can be verified by measuring the antitoxin level in the blood. Diphtheria can be prevented in those exposed, as well as treated with the antibiotics erythromycin or benzylpenicillin. In severe cases a tracheotomy may be needed to open the airway.
In 2015, 4,500 cases were officially reported worldwide, down from nearly 100,000 in 1980. About a million cases a year are believed to have occurred before the 1980s. Diphtheria currently occurs most often in sub-Saharan Africa, South Asia, and Indonesia. In 2015, it resulted in 2,100 deaths, down from 8,000 deaths in 1990. In areas where it is still common, children are most affected. It is rare in the developed world due to widespread vaccination, but can re-emerge if vaccination rates decrease. In the United States, 57 cases were reported between 1980 and 2004. Death occurs in 5–10% of those diagnosed. The disease was first described in the 5th century BC by Hippocrates. The bacterium was identified in 1882 by Edwin Klebs.
Signs and symptoms
The symptoms of diphtheria usually begin two to seven days after infection. They include fever of 38 °C (100.4 °F) or above; chills; fatigue; bluish skin coloration (cyanosis); sore throat; hoarseness; cough; headache; difficulty swallowing; painful swallowing; difficulty breathing; rapid breathing; foul-smelling and bloodstained nasal discharge; and lymphadenopathy. Within two to three days, diphtheria may destroy healthy tissues in the respiratory system. The dead tissue forms a thick, gray coating that can build up in the throat or nose. This thick gray coating is called a "pseudomembrane." It can cover tissues in the nose, tonsils, voice box, and throat, making it very hard to breathe and swallow. Symptoms can also include cardiac arrhythmias, myocarditis, and cranial and peripheral nerve palsies.
Diphtheritic croup
Laryngeal diphtheria can lead to a characteristic swollen neck and throat, or "bull neck." The swollen throat is often accompanied by a serious respiratory condition, characterized by a brassy or "barking" cough, stridor, hoarseness, and difficulty breathing; and historically referred to variously as "diphtheritic croup," "true croup," or sometimes simply as "croup." Diphtheritic croup is extremely rare in countries where diphtheria vaccination is customary. As a result, the term "croup" nowadays most often refers to an unrelated viral illness that produces similar but milder respiratory symptoms.
Transmission
Human-to-human transmission of diphtheria typically occurs through the air when an infected individual coughs or sneezes. Breathing in particles released from the infected individual leads to infection. Contact with any lesions on the skin can also lead to transmission of diphtheria, but this is uncommon. Indirect infections can occur, as well. If an infected individual touches a surface or object, the bacteria can be left behind and remain viable. Also, some evidence indicates diphtheria has the potential to be zoonotic, but this has yet to be confirmed. Corynebacterium ulcerans has been found in some animals, which would suggest zoonotic potential.
Mechanism
Diphtheria toxin (DT) is produced only by C. diphtheriae infected with a certain type of bacteriophage. Toxinogenicity is determined by phage conversion (also called lysogenic conversion); i.e., the ability of the bacterium to make DT changes as a consequence of infection by a particular phage. DT is encoded by the tox gene. Strains of corynephage are either tox+ (e.g., corynephage β) or tox− (e.g., corynephage γ). The tox gene becomes integrated into the bacterial genome. The chromosome of C. diphtheriae has two different but functionally equivalent bacterial attachment sites (attB) for integration of β prophage into the chromosome.
The diphtheria toxin precursor is a protein of molecular weight 60 kDa. Certain proteases, such as trypsin, selectively cleave DT to generate two peptide chains, amino-terminal fragment A (DT-A) and carboxyl-terminal fragment B (DT-B), which are held together by a disulfide bond. DT-B is a recognition subunit that gains entry of DT into the host cell by binding to the EGF-like domain of heparin-binding EGF-like growth factor on the cell surface. This signals the cell to internalize the toxin within an endosome via receptor-mediated endocytosis. Inside the endosome, DT is split by a trypsin-like protease into DT-A and DT-B. The acidity of the endosome causes DT-B to create pores in the endosome membrane, thereby catalyzing the release of DT-A into the cytoplasm.
Fragment A inhibits the synthesis of new proteins in the affected cell by catalyzing ADP-ribosylation of elongation factor EF-2—a protein that is essential to the translation step of protein synthesis. This ADP-ribosylation involves the transfer of an ADP-ribose from NAD+ to a diphthamide (a modified histidine) residue within the EF-2 protein. Since EF-2 is needed for the moving of tRNA from the A-site to the P-site of the ribosome during protein translation, ADP-ribosylation of EF-2 prevents protein synthesis.
ADP-ribosylation of EF-2 is reversed by giving high doses of nicotinamide (a form of vitamin B3), since this is one of the reaction's end products, and high amounts drive the reaction in the opposite direction.
Diagnosis
The current clinical case definition of diphtheria used by the United States' Centers for Disease Control and Prevention is based on both laboratory and clinical criteria.
Laboratory criteria
Isolation of C. diphtheriae from a Gram stain or throat culture from a clinical specimen.
Histopathologic diagnosis of diphtheria by Albert's stain.
Toxin demonstration
In vivo tests (guinea pig inoculation): Subcutaneous and intracutaneous tests.
In vitro test: Elek's gel precipitation test, detection of tox gene by PCR, ELISA, ICA.
Clinical criteria
Upper respiratory tract illness with sore throat.
Low-grade fever (above is rare).
An adherent, dense, grey pseudomembrane covering the posterior aspect of the pharynx; in severe cases, it can extend to cover the entire tracheobronchial tree.
Case classification
Probable: a clinically compatible case that is not laboratory-confirmed, and is not epidemiologically linked to a laboratory-confirmed case.
Confirmed: a clinically compatible case that is either laboratory-confirmed or epidemiologically linked to a laboratory-confirmed case.
Empirical treatment should generally be started in a patient in whom suspicion of diphtheria is high.
Prevention
Vaccination against diphtheria is commonly done in infants, and delivered as a combination vaccine, such as a DPT vaccine (diphtheria, pertussis, tetanus). Pentavalent vaccines, which vaccinate against diphtheria and four other childhood diseases simultaneously, are frequently used in disease prevention programs in developing countries by organizations such as UNICEF.
Treatment
The disease may remain manageable, but in more severe cases, lymph nodes in the neck may swell, and breathing and swallowing are more difficult. People in this stage should seek immediate medical attention, as obstruction in the throat may require intubation or a tracheotomy. Abnormal cardiac rhythms can occur early in the course of the illness or weeks later, and can lead to heart failure. Diphtheria can also cause paralysis in the eye, neck, throat, or respiratory muscles. Patients with severe cases are put in a hospital intensive care unit, and given diphtheria antitoxin (consisting of antibodies isolated from the serum of horses that have been challenged with diphtheria toxin). Since antitoxin does not neutralize toxin that is already bound to tissues, delaying its administration increases risk of death. Therefore, the decision to administer diphtheria antitoxin is based on clinical diagnosis, and should not await laboratory confirmation.
Antibiotics have not been demonstrated to affect healing of local infection in diphtheria patients treated with antitoxin. Antibiotics are used in patients or carriers to eradicate C. diphtheriae, and prevent its transmission to others. The Centers for Disease Control and Prevention (CDC) recommends one of the following:
Metronidazole
Erythromycin is given (orally or by injection) for 14 days (40 mg/kg per day with a maximum of 2 g/d), or
Procaine penicillin G is given intramuscularly for 14 days (300,000 U/d for patients weighing <10 kg, and 600,000 U/d for those weighing >10 kg); patients with allergies to penicillin G or erythromycin can use rifampin or clindamycin.
In cases that progress beyond a throat infection, diphtheria toxin spreads through the blood, and can lead to potentially life-threatening complications that affect other organs, such as the heart and kidneys. The toxin can damage the heart, impairing its ability to pump blood, and the kidneys, impairing their ability to clear wastes. It can also cause nerve damage, eventually leading to paralysis. About 40–50% of those left untreated can die.
Epidemiology
Diphtheria is fatal in 5–10% of cases. In children under five years and adults over 40 years, the fatality rate may be as much as 20%. In 2013, it resulted in 3,300 deaths, down from 8,000 deaths in 1990. Better standards of living, mass immunization, improved diagnosis, prompt treatment, and more effective health care have led to a decrease in cases worldwide.
History
In 1613, Spain experienced an epidemic of diphtheria, referred to as "The Year of Strangulations."
In 1705, the Mariana Islands experienced an epidemic of diphtheria and typhus simultaneously, reducing the population to about 5,000 people.
In 1735, a diphtheria epidemic swept through New England.
Before 1826, diphtheria was known by different names across the world. In England, it was known as "Boulogne sore throat," as the illness had spread from France. In 1826, Pierre Bretonneau gave the disease the name diphthérite (from Greek διφθέρα, diphthera 'leather'), describing the appearance of pseudomembrane in the throat.
In 1856, Victor Fourgeaud described an epidemic of diphtheria in California.
In 1878, Princess Alice (Queen Victoria's second daughter) and her family became infected with diphtheria; Princess Alice and her four-year-old daughter, Princess Marie, both died.
In 1883, Edwin Klebs identified the bacterium causing diphtheria, and named it Klebs–Loeffler bacterium. The club shape of this bacterium helped Klebs to differentiate it from other bacteria. Over time, it has been called Microsporon diphtheriticum, Bacillus diphtheriae, and Mycobacterium diphtheriae. Current nomenclature is Corynebacterium diphtheriae.
In 1884, German bacteriologist Friedrich Loeffler became the first person to cultivate C. diphtheriae. He used Koch's postulates to prove association between C. diphtheriae and diphtheria. He also showed that the bacillus produces an exotoxin.
In 1885, Joseph P. O'Dwyer introduced the O'Dwyer tube for laryngeal intubation in patients with an obstructed larynx. It soon replaced tracheostomy as the emergency diphtheric intubation method.
In 1888, Emile Roux and Alexandre Yersin showed that a substance produced by C. diphtheriae caused symptoms of diphtheria in animals.
In 1890, Shibasaburō Kitasato and Emil von Behring immunized guinea pigs with heat-treated diphtheria toxin. They also immunized goats and horses in the same way, and showed that an "antitoxin" made from serum of immunized animals could cure the disease in non-immunized animals. Behring used this antitoxin (now known to consist of antibodies that neutralize the toxin produced by C. diphtheriae) for human trials in 1891, but they were unsuccessful. Successful treatment of human patients with horse-derived antitoxin began in 1894, after production and quantification of antitoxin had been optimized. In 1901, Von Behring won the first Nobel Prize in medicine for his work on diphtheria.
In 1895, H. K. Mulford Company of Philadelphia started production and testing of diphtheria antitoxin in the United States. Park and Biggs described the method for producing serum from horses for use in diphtheria treatment.
In 1897, Paul Ehrlich developed a standardized unit of measure for diphtheria antitoxin. This was the first ever standardization of a biological product, and played an important role in future developmental work on sera and vaccines.
In 1901, 10 of 11 inoculated St. Louis children died from contaminated diphtheria antitoxin. The horse from which the antitoxin was derived died of tetanus. This incident, coupled with a tetanus outbreak in Camden, New Jersey, played an important part in initiating federal regulation of biologic products.
On 7 January 1904, Ruth Cleveland died of diphtheria at the age of 12 years in Princeton, New Jersey. Ruth was the eldest daughter of former President Grover Cleveland and the former First Lady, Frances Folsom.
In 1905, Franklin Royer, from Philadelphia's Municipal Hospital, published a paper urging timely treatment for diphtheria and adequate doses of antitoxin. In 1906, Clemens Pirquet and Béla Schick described serum sickness in children receiving large quantities of horse-derived antitoxin.
Between 1910 and 1911, Béla Schick developed the Schick test to detect pre-existing immunity to diphtheria in an exposed person. Only those who had not been exposed to diphtheria were vaccinated. A massive, five-year campaign was coordinated by Dr. Schick. As a part of the campaign, 85 million pieces of literature were distributed by the Metropolitan Life Insurance Company, with an appeal to parents to "Save your child from diphtheria." A vaccine was developed in the next decade, and deaths began declining significantly in 1924.
In 1919, in Dallas, Texas, 10 children were killed and 60 others made seriously ill by toxic antitoxin which had passed the tests of the New York State Health Department. The manufacturer of the antitoxin, the Mulford Company of Philadelphia, paid damages in every case.
During the 1920s, an annual estimate of 100,000 to 200,000 diphtheria cases and 13,000 to 15,000 deaths occurred in the United States. Children represented a large majority of these cases and fatalities. One of the most infamous outbreaks of diphtheria occurred in 1925, in Nome, Alaska; the "Great Race of Mercy" to deliver diphtheria antitoxin is now celebrated by the Iditarod Trail Sled Dog Race.
In 1926, Alexander Thomas Glenny increased the effectiveness of diphtheria toxoid (a modified version of the toxin used for vaccination) by treating it with aluminum salts. Vaccination with toxoid was not widely used until the early 1930s. In 1939, Dr. Nora Wattie, who was the Principal Medical Officer (Maternity and Child Welfare) of Glasgow between 1934 and 1964, introduced immunisation clinics across Glasgow, and promoted mother and child health education, resulting in virtual eradication of the infection in the city.
Widespread vaccination pushed cases in the United States down from 4.4 per 100,000 inhabitants in 1932 to 2.0 in 1937. In Nazi Germany, where authorities preferred treatment and isolation over vaccination (until about 1939–1941), cases rose over the same period from 6.1 to 9.6 per 100,000 inhabitants.
Between June 1942 and February 1943, 714 cases of diphtheria were recorded at Sham Shui Po Barracks, resulting in 112 deaths because the Imperial Japanese Army did not release supplies of anti-diphtheria serum.
In 1943, diphtheria outbreaks accompanied war and disruption in Europe. The 1 million cases in Europe resulted in 50,000 deaths.
During 1948 in Kyoto, 68 of 606 children died after diphtheria immunization due to improper manufacture of aluminum phosphate toxoid.
In 1974, the World Health Organization included DPT vaccine in their Expanded Programme on Immunization for developing countries.
In 1975, an outbreak of cutaneous diphtheria in Seattle, Washington, was reported.
After the breakup of the former Soviet Union in 1991, vaccination rates in its constituent countries fell so low that an explosion of diphtheria cases occurred. In 1991, 2,000 cases of diphtheria occurred in the USSR. Between 1991 and 1998, as many as 200,000 cases were reported in the Commonwealth of Independent States, and resulted in 5,000 deaths. In 1994, the Russian Federation had 39,703 diphtheria cases. By contrast, in 1990, only 1,211 cases were reported.
In early May 2010, a case of diphtheria was diagnosed in Port-au-Prince, Haiti, after the devastating 2010 Haiti earthquake. The 15-year-old male patient died while workers searched for antitoxin.
In 2013, three children died of diphtheria in Hyderabad, India.
In early June 2015, a case of diphtheria was diagnosed at Vall d'Hebron University Hospital in Barcelona, Spain. The six-year-old child who died of the illness had not been previously vaccinated due to parental opposition to vaccination. It was the first case of diphtheria in the country since 1986, as reported by the Spanish daily newspaper El Mundo, or since 1998, as reported by the WHO.
In March 2016, a three-year-old girl died of diphtheria in the University Hospital of Antwerp, Belgium.
In June 2016, three girls, aged three, five, and seven, died of diphtheria in Kedah, Malacca, and Sabah, Malaysia.
In January 2017, more than 300 cases were recorded in Venezuela.
In 2017, outbreaks occurred in a Rohingya refugee camp in Bangladesh, and amongst children unvaccinated due to the Yemeni Civil War.
In November and December 2017, an outbreak of diphtheria occurred in Indonesia, with more than 600 cases found and 38 fatalities.
In November 2019, two cases of diphtheria occurred in the Lothian area of Scotland. Additionally, in November 2019, an unvaccinated 8-year-old boy died of diphtheria in Athens, Greece.
In July 2022, two cases of diphtheria occurred in northern New South Wales, Australia.
In October 2022, there was an outbreak of diphtheria at the former Manston airfield, a former Ministry of Defence (MoD) site in Kent, England, which had been converted to an asylum seeker processing centre. The capacity of the processing centre was 1,000 people, although about 3,000 were living at the site, with some accommodated in tents. The Home Office, the government department responsible for asylum seekers, refused to confirm the number of cases.
In December 2023 there was an outbreak at a school in Luton, in the United Kingdom. UK Health Security Agency (UKHSA) issued a statement saying specialists have been providing public health support following confirmation of the diphtheria case at a primary school in Luton. The agency said it is working closely with local and national partners "to ensure all necessary public health measures are implemented" following the discovery of the new case. The statement added: "We have conducted a risk assessment and close contacts of the case have been identified and where appropriate, vaccination and advice will be given to prevent the spread of the infection."
| Biology and health sciences | Infectious disease | null |
58950 | https://en.wikipedia.org/wiki/Galaxy%20cluster | Galaxy cluster | A galaxy cluster, or a cluster of galaxies, is a structure that consists of anywhere from hundreds to thousands of galaxies that are bound together by gravity, with typical masses ranging from 10^14 to 10^15 solar masses. They are the second-largest known gravitationally bound structures in the universe after some superclusters (of which only one, the Shapley Supercluster, is known to be bound). They were believed to be the largest known structures in the universe until the 1980s, when superclusters were discovered. One of the key features of clusters is the intracluster medium (ICM). The ICM consists of heated gas between the galaxies and has a peak temperature between 2 and 15 keV that is dependent on the total mass of the cluster. Galaxy clusters should not be confused with galactic clusters (also known as open clusters), which are star clusters within galaxies, or with globular clusters, which typically orbit galaxies. Small aggregates of galaxies are referred to as galaxy groups rather than clusters of galaxies. The galaxy groups and clusters can themselves cluster together to form superclusters.
Notable galaxy clusters in the relatively nearby Universe include the Virgo Cluster, Fornax Cluster, Hercules Cluster, and the Coma Cluster. A very large aggregation of galaxies known as the Great Attractor, dominated by the Norma Cluster, is massive enough to affect the local expansion of the Universe. Notable galaxy clusters in the distant, high-redshift universe include SPT-CL J0546-5345 and SPT-CL J2106-5844, the most massive galaxy clusters found in the early Universe. In the last few decades, they have also been found to be relevant sites of particle acceleration, a feature discovered by observing non-thermal diffuse radio emissions, such as radio halos and radio relics. Using the Chandra X-ray Observatory, structures such as cold fronts and shock waves have also been found in many galaxy clusters.
Basic properties
Galaxy clusters typically have the following properties:
They contain 100 to 1,000 galaxies, hot X-ray emitting gas and large amounts of dark matter. Details are described in the "Composition" section.
The distribution of the three components is approximately the same in the cluster.
They have total masses of 10^14 to 10^15 solar masses.
They typically have a diameter of 1 to 5 Mpc (see 10^23 m for distance comparisons).
The spread of velocities for the individual galaxies is about 800–1000 km/s; together with the cluster size, this spread relates to the total mass through the virial theorem, as in the sketch after this list.
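As a rough consistency check on the numbers above, the Python sketch below applies the order-of-magnitude virial relation M ~ sigma^2 R / G. Published estimators carry prefactors of order unity, which are omitted here, and the chosen radius is an illustrative value within the quoted size range.

# Order-of-magnitude check: do the quoted velocity spread and size
# imply the quoted mass? Virial relation M ~ sigma^2 * R / G.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # one solar mass, kg
MPC = 3.086e22         # one megaparsec, m

sigma = 900e3          # velocity spread, m/s (~900 km/s, mid-range)
radius = 1.5 * MPC     # illustrative radius for a 1-5 Mpc diameter

mass = sigma**2 * radius / G
print(f"M ~ {mass / M_SUN:.1e} solar masses")  # ~3e14, inside the quoted range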
Composition
There are three main components of a galaxy cluster: the galaxies themselves, the hot X-ray emitting intracluster gas, and dark matter.
Classification
Galaxy clusters are categorized as type I, II, or III based on morphology.
Galaxy clusters as measuring instruments
Gravitational redshift
Galaxy clusters have been used by Radek Wojtak from the Niels Bohr Institute at the University of Copenhagen to test predictions of general relativity: energy loss from light escaping a gravitational field. Photons emitted from the center of a galaxy cluster should lose more energy than photons coming from the edge of the cluster because gravity is stronger in the center. Light emitted from the center of a cluster therefore has a longer wavelength than light coming from the edge. This effect is known as gravitational redshift. Using the data collected from 8000 galaxy clusters, Wojtak was able to study the properties of gravitational redshift for the distribution of galaxies in clusters. He found that the light from the clusters was redshifted in proportion to the distance from the center of the cluster, as predicted by general relativity. The result also strongly supports the Lambda-Cold Dark Matter model of the Universe, according to which most of the cosmos is made up of dark matter that interacts with ordinary matter only through gravity.
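For a sense of scale, the sketch below estimates the size of this effect from the weak-field relation z ≈ ΔΦ/c² ≈ GM/(Rc²). The mass and radius are illustrative round numbers for a rich cluster, not the values used in the study.

# Back-of-envelope gravitational redshift from a cluster core:
# z ~ G * M / (R * c^2), in the weak-field approximation.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

mass = 1e15 * M_SUN   # an illustrative massive cluster
radius = 1.0 * MPC    # illustrative center-to-edge scale

z = G * mass / (radius * C**2)
print(f"z ~ {z:.1e} (equivalent velocity ~ {z * C / 1e3:.0f} km/s)")

The resulting shift, equivalent to roughly ten kilometres per second, is tiny compared with the ~1000 km/s spread of galaxy velocities within a cluster, which is why the study needed data from thousands of clusters to detect it.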
Gravitational lensing
Galaxy clusters are also used for their strong gravitational potential as gravitational lenses to boost the reach of telescopes. The gravitational distortion of space-time occurs near massive galaxy clusters and bends the path of photons to create a cosmic magnifying glass. This can be done with photons of any wavelength from the optical to the X-ray band. The latter is more difficult, because galaxy clusters emit a lot of X-rays. However, X-ray emission may still be detected by combining X-ray data with optical data. One particular case is the use of the Phoenix galaxy cluster to observe a dwarf galaxy in its early high-energy stages of star formation.
List
Gallery
Images
Videos
| Physical sciences | Basics_2 | Astronomy |
58968 | https://en.wikipedia.org/wiki/Observatory | Observatory | An observatory is a location used for observing terrestrial, marine, or celestial events. Astronomy, climatology/meteorology, geophysics, oceanography and volcanology are examples of disciplines for which observatories have been constructed.
The term observatoire has been used in French since at least 1976 to denote any institution that compiles and presents data on a particular subject (such as public health observatory) or for a particular geographic area (European Audiovisual Observatory).
Astronomical observatories
Astronomical observatories are mainly divided into four categories: space-based, airborne, ground-based, and underground-based. Historically, ground-based observatories could be as simple as a mural instrument (for measuring the angle between stars) or Stonehenge (which has some alignments on astronomical phenomena).
Ground-based observatories
Ground-based observatories, located on the surface of Earth, are used to make observations in the radio and visible light portions of the electromagnetic spectrum. Most optical telescopes are housed within a dome or similar structure, to protect the delicate instruments from the elements. Telescope domes have a slit or other opening in the roof that can be opened during observing, and closed when the telescope is not in use. In most cases, the entire upper portion of the telescope dome can be rotated to allow the instrument to observe different sections of the night sky. Radio telescopes usually do not have domes.
For optical telescopes, most ground-based observatories are located far from major centers of population, to avoid the effects of light pollution. The ideal locations for modern observatories are sites that have dark skies, a large percentage of clear nights per year, dry air, and are at high elevations. At high elevations, the Earth's atmosphere is thinner, thereby minimizing the effects of atmospheric turbulence and resulting in better astronomical "seeing". Sites that meet the above criteria for modern observatories include the southwestern United States, Hawaii, Canary Islands, the Andes, and high mountains in Mexico such as Sierra Negra. Major optical observatories include Mauna Kea Observatory and Kitt Peak National Observatory in the US, Roque de los Muchachos Observatory in Spain, and Paranal Observatory and Cerro Tololo Inter-American Observatory in Chile.
A research study performed in 2009 showed that the best possible location for a ground-based observatory on Earth is Ridge A, a place in the central part of Eastern Antarctica. This location provides the least atmospheric disturbance and the best visibility.
Solar observatories
Radio observatories
Beginning in 1933, radio telescopes have been built for use in the field of radio astronomy to observe the Universe in the radio portion of the electromagnetic spectrum. Such an instrument, or collection of instruments, with supporting facilities such as control centres, visitor housing, data reduction centers, and/or maintenance facilities are called radio observatories. Radio observatories are similarly located far from major population centers to avoid electromagnetic interference (EMI) from radio, TV, radar, and other EMI emitting devices, but unlike optical observatories, radio observatories can be placed in valleys for further EMI shielding. Some of the world's major radio observatories include the Very Large Array in New Mexico, United States, Jodrell Bank in the UK, Arecibo in Puerto Rico, Parkes in New South Wales, Australia, and Chajnantor in Chile. A related discipline is Very-long-baseline interferometry (VLBI).
Highest astronomical observatories
Since the mid-20th century, a number of astronomical observatories have been constructed at very high altitudes, above . The largest and most notable of these is the Mauna Kea Observatory, located near the summit of a volcano in Hawaiʻi. The Chacaltaya Astrophysical Observatory in Bolivia, at , was the world's highest permanent astronomical observatory from the time of its construction during the 1940s until 2009. It has now been surpassed by the new University of Tokyo Atacama Observatory, an optical-infrared telescope on a remote mountaintop in the Atacama Desert of Chile.
Oldest astronomical observatories
The oldest proto-observatories, in the sense of an observation post for astronomy, include:
Wurdi Youang, Australia
Zorats Karer, Karahunj, Armenia
Loughcrew, Ireland
Newgrange, Ireland
Stonehenge, Great Britain
Chankillo, Peru
El Caracol, Mexico
Buto, Egypt
Abu Simbel, Egypt
Kokino, Kumanovo, North Macedonia
Observatory at Rhodes, Greece
Goseck circle, Germany
Ujjain, India
Arkaim, Russia
Cheomseongdae, South Korea
Angkor Wat, Cambodia
The oldest true observatories, in the sense of a specialized research institute, include:
825: Al-Shammisiyyah Observatory, Baghdad, Iraq
869: Mahodayapuram Observatory, Kerala, India
1259: Maragheh Observatory, Azerbaijan, Iran
1276: Gaocheng Astronomical Observatory, China
1420: Ulugh Beg Observatory, Samarqand, Uzbekistan
1442: Beijing Ancient Observatory, China
1577: Constantinople Observatory of Taqi ad-Din, Turkey
1580: Uraniborg, Denmark
1581: Stjerneborg, Denmark
1633: Leiden Observatory, Netherlands
1642: Panzano Observatory, Italy
1642: Round Tower, Denmark
1667: Paris Observatory, France
1675: Royal Greenwich Observatory, England
1695: Sukharev Tower, Russia
1711: Berlin Observatory, Germany
1724: Jantar Mantar, India
1753: Stockholm Observatory, Sweden
1753: Vilnius University Observatory, Lithuania
1753: Real Instituto y Observatorio de la Armada, Spain
1757: Macfarlane Observatory, Scotland.
1759: Trieste Observatory, Italy.
1759: Turin Observatory, Italy.
1764: Brera Astronomical Observatory, Italy.
1765: Mohr Observatory, Indonesia.
1771: Lviv Observatory, Ukraine.
1774: Observatory of the Vatican, Italy.
1785: Dunsink Observatory, Ireland.
1786: Madras Observatory, India.
1789: Armagh Observatory, Northern Ireland.
1790: Royal Observatory of Madrid, Spain.
1803: National Astronomical Observatory, Bogotá, Colombia.
1811: Tartu Old Observatory, Estonia
1812: Astronomical Observatory of Capodimonte, Naples, Italy
1830/1842: Depot of Charts & Instruments/US Naval Observatory, US
1830: Yale University Observatory Atheneum, US
1834: Helsinki University Observatory, Finland
1838: Hopkins Observatory, Williams College, US
1838: Loomis Observatory, Western Reserve Academy, US
1839: Pulkovo Observatory, Russia
1842: Cincinnati Observatory, US
1844: Georgetown University Astronomical Observatory, US
1847: Harvard College Observatory, US
1854: Detroit Observatory, US
1873: Quito Astronomical Observatory, Ecuador
1878: Lisbon Astronomical Observatory, Portugal
1884: McCormick Observatory, US
1888: Lick Observatory, US
1890: Smithsonian Astrophysical Observatory, US
1894: Lowell Observatory, US
1895: Theodor Jacobsen Observatory, US
1897: Yerkes Observatory, US
1899: Kodaikanal Solar Observatory, India
Space-based observatories
Space-based observatories are telescopes or other instruments that are located in outer space, many in orbit around the Earth. Space telescopes can be used to observe astronomical objects at wavelengths of the electromagnetic spectrum that cannot penetrate the Earth's atmosphere and are thus impossible to observe using ground-based telescopes. The Earth's atmosphere is opaque to ultraviolet radiation, X-rays, and gamma rays and is partially opaque to infrared radiation, so observations in these portions of the electromagnetic spectrum are best carried out from a location above the atmosphere of our planet. Another advantage of space-based telescopes is that, because of their location above the Earth's atmosphere, their images are free from the effects of atmospheric turbulence that plague ground-based observations. As a result, the angular resolution of space telescopes such as the Hubble Space Telescope is often much smaller than that of a ground-based telescope with a similar aperture. However, all these advantages do come with a price. Space telescopes are much more expensive to build than ground-based telescopes. Due to their location, space telescopes are also extremely difficult to maintain. The Hubble Space Telescope was able to be serviced by the Space Shuttles while many other space telescopes cannot be serviced at all.
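To illustrate the resolution point, the sketch below evaluates the standard diffraction limit θ = 1.22 λ/D for a 2.4 m Hubble-class mirror at a visible wavelength and compares it with the roughly one arcsecond of blurring typical of ground-based atmospheric seeing; the wavelength and seeing figures are representative values, not exact specifications.

# Diffraction-limited angular resolution: theta = 1.22 * lambda / D.
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600

wavelength = 550e-9   # m, mid-visible light (representative)
aperture = 2.4        # m, Hubble's primary mirror diameter

theta = 1.22 * wavelength / aperture  # radians
print(f"diffraction limit ~ {theta * RAD_TO_ARCSEC:.2f} arcsec "
      "(vs ~1 arcsec typical atmospheric seeing)")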
Airborne observatories
Airborne observatories have the advantage of height over ground installations, putting them above most of the Earth's atmosphere. They also have an advantage over space telescopes: The instruments can be deployed, repaired and updated much more quickly and inexpensively. The Kuiper Airborne Observatory and the Stratospheric Observatory for Infrared Astronomy use airplanes to observe in the infrared, which is absorbed by water vapor in the atmosphere. High-altitude balloons for X-ray astronomy have been used in a variety of countries.
Neutrino observatories
Example underground, underwater or under ice neutrino observatories include:
1998–2003 Gallium Neutrino Observatory
1999–2006 Sudbury Neutrino Observatory
2003 Baikal Deep Underwater Neutrino Telescope
2010 IceCube Neutrino Observatory
2012 Helium and Lead Observatory (HALO)
Meteorological observatories
Example meteorological observatories include:
1762 Kremsmünster Observatory, Austria
1781 Hohenpeißenberg Meteorological Observatory, Germany
1841 Colaba Observatory, India
1868 Kandilli Observatory, Türkiye
1869 New York Meteorological Observatory in Central Park, New York
1883 Hong Kong Observatory, Hong Kong
1885 Blue Hill Meteorological Observatory, Massachusetts
1932 Mount Washington Observatory, New Hampshire
1956 Mauna Loa Observatory, Hawaii
| Physical sciences | Observational astronomy | null |
58991 | https://en.wikipedia.org/wiki/Fog | Fog | Fog is a visible aerosol consisting of tiny water droplets or ice crystals suspended in the air at or near the Earth's surface. Fog can be considered a type of low-lying cloud usually resembling stratus and is heavily influenced by nearby bodies of water, topography, and wind conditions. In turn, fog affects many human activities, such as shipping, travel, and warfare.
Fog appears when water vapor (water in its gaseous form) condenses. During condensation, molecules of water vapor combine to make tiny water droplets that hang in the air. Sea fog, which shows up near bodies of saline water, is formed as water vapor condenses on bits of salt. Fog is similar to, but less transparent than, mist.
Definition
The term fog is typically distinguished from the more generic term cloud in that fog is low-lying, and the moisture in the fog is often generated locally (such as from a nearby body of water, like a lake or ocean, or from nearby moist ground or marshes). By definition, fog reduces visibility to less than , whereas mist causes lesser impairment of visibility. For aviation purposes in the United Kingdom, a visibility of less than but greater than is considered to be mist if the relative humidity is 95% or greater; below 95%, haze is reported.
Formation
Fog forms when the difference between air temperature and dew point is less than . Fog begins to form when water vapor condenses into tiny water droplets that are suspended in the air. Some examples of ways that water vapor is condensed include wind convergence into areas of upward motion; precipitation or virga falling from above; daytime heating evaporating water from the surface of oceans, water bodies, or wet land; transpiration from plants; cool or dry air moving over warmer water; and lifting air over mountains. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Fog, like its elevated cousin stratus, is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass.
Fog normally occurs at a relative humidity near 100%. This occurs from either added moisture in the air, or falling ambient air temperature. However, fog can form at lower humidities and can sometimes fail to form with relative humidity at 100%. At 100% relative humidity, the air cannot hold additional moisture, thus the air will become supersaturated if additional moisture is added.
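As a worked example of the spread between air temperature and dew point, the following Python sketch estimates the dew point with the Magnus approximation (coefficient sets vary slightly between sources) and flags fog-prone conditions. The 2.5 °C spread used as a threshold is an illustrative rule of thumb, not an official cutoff.

import math

A, B = 17.27, 237.7  # Magnus coefficients for temperature in deg C

def dew_point(temp_c, rh_percent):
    """Approximate dew point (deg C) from temperature and relative
    humidity, via the Magnus formula."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

t, rh = 10.0, 97.0  # a damp, cooling evening (illustrative values)
spread = t - dew_point(t, rh)
msg = f"spread = {spread:.1f} deg C"
if spread < 2.5:  # illustrative fog-risk threshold
    msg += " -> fog likely"
print(msg)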
Fog commonly produces precipitation in the form of drizzle or very light snow. Drizzle occurs when the humidity attains 100% and the minute cloud droplets begin to coalesce into larger droplets. This can occur when the fog layer is lifted and cooled sufficiently, or when it is forcibly compressed from above by descending air. Drizzle becomes freezing drizzle when the temperature at the surface drops below the freezing point.
The thickness of a fog layer is largely determined by the altitude of the inversion boundary, which in coastal or oceanic locales is also the top of the marine layer, above which the air mass is warmer and drier. The inversion boundary varies its altitude primarily in response to the weight of the air above it, which is measured in terms of atmospheric pressure. The marine layer, and any fog-bank it may contain, will be "squashed" when the pressure is high and conversely may expand upwards when the pressure above it is lowering.
Types
Fog can form multiple ways, depending on how the cooling occurred that caused the condensation.
Radiation fog is formed by the cooling of land after sunset by infrared thermal radiation in calm conditions with a clear sky. The cooling ground then cools adjacent air by conduction, causing the air temperature to fall and reach the dew point, forming fog. In perfect calm, the fog layer can be less than a meter thick, but turbulence can promote a thicker layer. Radiation fog occurs at night and usually does not last long after sunrise, but it can persist all day in the winter months especially in areas bounded by high ground. Radiation fog is most common in autumn and early winter. Examples of this phenomenon include tule fog.
Ground fog is fog that obscures less than 60% of the sky and does not extend to the base of any overhead clouds. However, the term is usually a synonym for shallow radiation fog; in some cases the depth of the fog is on the order of tens of centimetres over certain kinds of terrain with the absence of wind.
Advection fog occurs when moist air passes over a cool surface by advection (wind) and is cooled. It is common as a warm front passes over an area with significant snow-pack. It is most common at sea when moist air encounters cooler waters, including areas of cold water upwelling, such as along the California coast. A strong enough temperature difference over water or bare ground can also cause advection fog.
Although strong winds often mix the air and can disperse, fragment, or prevent many kinds of fog, markedly warmer and humid air blowing over a snowpack can continue to generate advection fog at elevated velocities up to or more – this fog will be in a turbulent, rapidly moving, and comparatively shallow layer, observed as a few centimetres/inches in depth over flat farm fields, flat urban terrain and the like, and/or form more complex forms where the terrain is different such as rotating areas in the lee of hills or large buildings and so on.
Fog formed by advection along the California coastline is propelled onto land by one of several processes. A cold front can push the marine layer coast-ward, an occurrence most typical in the spring or late fall. During the summer months, a low-pressure trough produced by intense heating inland creates a strong pressure gradient, drawing in the dense marine layer. Also, during the summer, strong high pressure aloft over the desert southwest, usually in connection with the summer monsoon, produces a south to southeasterly flow which can drive the offshore marine layer up the coastline; a phenomenon known as a "southerly surge", typically following a coastal heat spell. However, if the monsoonal flow is sufficiently turbulent, it might instead break up the marine layer and any fog it may contain. Moderate turbulence will typically transform a fog bank, lifting it and breaking it up into shallow convective clouds called stratocumulus.
Frontal fog forms in much the same way as stratus cloud near a front when raindrops, falling from relatively warm air above a frontal surface, evaporate into cooler air close to the Earth's surface and cause it to become saturated. The water vapor cools and at the dewpoint it condenses and fog forms. This type of fog can be the result of a very low frontal stratus cloud subsiding to surface level in the absence of any lifting agent after the front passes.
Hail fog sometimes occurs in the vicinity of significant hail accumulations due to decreased temperature and increased moisture leading to saturation in a very shallow layer near the surface. It most often occurs when there is a warm, humid layer atop the hail and when wind is light. This ground fog tends to be localized but can be extremely dense and abrupt. It may form shortly after the hail falls, once the hail has had time to cool the air by absorbing heat as it melts and evaporates.
Freezing conditions
Freezing fog occurs when liquid fog droplets freeze to surfaces, forming white soft or hard rime ice. This is very common on mountain tops which are exposed to low clouds. It is equivalent to freezing rain and essentially the same as the ice that forms inside a freezer which is not of the "frostless" or "frost-free" type. The term "freezing fog" may also refer to fog where water vapor is super-cooled, filling the air with small ice crystals similar to very light snow. It seems to make the fog "tangible", as if one could "grab a handful".
In the western United States, freezing fog may be referred to as pogonip. It occurs commonly during cold winter spells, usually in deep mountain valleys. The word pogonip is derived from the Shoshone word paγi̵nappi̵h, which means "cloud".
In The Old Farmer's Almanac, in the calendar for December, the phrase "Beware the Pogonip" regularly appears. In his anthology Smoke Bellew, Jack London describes a pogonip which surrounded the main characters, killing one of them.
The phenomenon is common in the inland areas of the Pacific Northwest, with temperatures in the range. The Columbia Plateau experiences this phenomenon most years during temperature inversions, sometimes lasting for as long as three weeks. The fog typically begins forming around the area of the Columbia River and expands, sometimes covering the land to distances as far away as La Pine, Oregon, almost due south of the river and into south central Washington.
Frozen fog (also known as ice fog) is any kind of fog where the droplets have frozen into extremely tiny crystals of ice in midair. Generally, this requires temperatures at or below , making it common only in and near the Arctic and Antarctic regions. It is most often seen in urban areas where it is created by the freezing of water vapor present in automobile exhaust and combustion products from heating and power generation. Urban ice fog can become extremely dense and will persist day and night until the temperature rises. It can be associated with the diamond dust form of precipitation, in which very small crystals of ice form and slowly fall. This often occurs during blue sky conditions, which can cause many types of halos and other results of refraction of sunlight by the airborne crystals. Ice fog often leads to the visual phenomenon of light pillars.
Topographical influences
Up-slope fog or hill fog forms when winds blow air up a slope (called orographic lift), adiabatically cooling it as it rises and causing the moisture in it to condense. This often causes freezing fog on mountaintops, where the cloud ceiling would not otherwise be low enough.
Valley fog forms in mountain valleys, often during winter. It is essentially a radiation fog confined by local topography and can last for several days in calm conditions. In California's Central Valley, valley fog is often referred to as tule fog.
Sea and coastal areas
Sea fog (also known as haar or fret) is heavily influenced by the presence of sea spray and microscopic airborne salt crystals. Clouds of all types require minute hygroscopic particles upon which water vapor can condense. Over the ocean surface, the most common particles are salt from salt spray produced by breaking waves. Except in areas of storminess, the most common areas of breaking waves are located near coastlines, hence the greatest densities of airborne salt particles are there.
Condensation on salt particles has been observed to occur at humidities as low as 70%, thus fog can occur even in relatively dry air in suitable locations such as the California coast. Typically, such lower humidity fog is preceded by a transparent mistiness along the coastline as condensation competes with evaporation, a phenomenon that is typically noticeable by beachgoers in the afternoon. Another recently discovered source of condensation nuclei for coastal fog is kelp seaweed. Researchers have found that under stress (intense sunlight, strong evaporation, etc.), kelp releases particles of iodine which in turn become nuclei for condensation of water vapor, causing fog that diffuses direct sunlight.
Sea smoke, also called steam fog or evaporation fog, is created by cold air passing over warmer water or moist land. It may cause freezing fog or sometimes hoar frost. This situation can also lead to the formation of steam devils, which look like their dust counterparts. Lake-effect fog is of this type, sometimes in combination with other causes like radiation fog. It tends to differ from most advective fog formed over land in that it is (like lake-effect snow) a convective phenomenon, resulting in fog that can be very dense and deep and looks fluffy from above. Arctic sea smoke is similar to sea smoke but occurs when the air is very cold. Instead of forming ordinary water droplets, the rising water vapor freezes as it condenses into columns, producing a fog that is usually misty and smoke-like.
Garúa fog near the coast of Chile and Peru occurs when typical fog produced by the sea travels inland but suddenly meets an area of hot air. This causes the water particles of fog to shrink by evaporation, producing a "transparent mist". Garúa fog is nearly invisible, yet it still forces drivers to use windshield wipers because of condensation onto cooler hard surfaces. Camanchaca is a similar dense fog.
Visibility effects
Depending on the concentration of the droplets, visibility in fog can range from the appearance of haze to almost zero visibility. Many lives are lost each year worldwide from accidents involving fog conditions on the highways, including multiple-vehicle collisions.
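As a hedged worked example of how droplet concentration maps to visibility (the droplet figures are assumed for illustration): Koschmieder's rule estimates the visual range as V ≈ 3.912/β, where β is the atmospheric extinction coefficient. For fog droplets much larger than the wavelength of visible light, β ≈ 2πr²N; taking N = 10⁸ droplets per cubic metre with radius r = 10 μm gives β ≈ 0.063 m⁻¹ and V ≈ 60 m, i.e. a dense fog.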
The aviation travel industry is affected by the severity of fog conditions. Even though modern auto-landing computers can put an aircraft down without the aid of a pilot, personnel manning an airport control tower must be able to see if aircraft are sitting on the runway awaiting takeoff. Safe operations are difficult in thick fog, and civilian airports may forbid takeoffs and landings until conditions improve.
A solution for landing returning military aircraft developed in World War II was called Fog Investigation and Dispersal Operation (FIDO). It involved burning enormous amounts of fuel alongside runways to evaporate fog, allowing returning fighter and bomber pilots sufficient visual cues to safely land their aircraft. The high energy demands of this method discourage its use for routine operations.
Shadows
Shadows are cast through fog in three dimensions. The fog is dense enough to be illuminated by light that passes through gaps in a structure or tree, but thin enough to let a large quantity of that light pass through to illuminate points further on. As a result, object shadows appear as "beams" oriented in a direction parallel to the light source. These voluminous shadows are created the same way as crepuscular rays, which are the shadows of clouds. In fog, it is solid objects that cast shadows.
Sound propagation and acoustic effects
Sound typically travels fastest and farthest through solids, then liquids, then gases such as the atmosphere. In fog, sound propagation is affected both by the small distances between water droplets and by air temperature differences.
Though fog is essentially liquid water, the many droplets are separated by small air gaps. High-pitched sounds have a high frequency, which in turn means they have a short wavelength. To transmit a high-frequency wave, air must move back and forth very quickly. Short-wavelength, high-pitched sound waves are reflected and refracted by the many separated water droplets, partially cancelling and dissipating their energy (a process called "damping"). In contrast, low-pitched notes, with a low frequency and a long wavelength, move the air less rapidly and less often, and lose less energy to interactions with small water droplets. Low-pitched notes are less affected by fog and travel further, which is why foghorns use a low-pitched tone.
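For a rough sense of scale (a back-of-the-envelope illustration; the speed of sound of about 343 m/s in air at 20 °C is assumed, not stated above): since λ = v/f, a high-pitched 2 kHz tone has a wavelength of λ ≈ 343/2000 ≈ 0.17 m, whereas a 100 Hz foghorn blast has λ ≈ 343/100 ≈ 3.4 m, far larger than the spacing between fog droplets, and so is much less damped.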
A fog can be caused by a temperature inversion, in which the cold air pooled at the surface that helped create the fog sits beneath warmer air above. The inverted boundary between cold and warm air reflects sound waves back toward the ground, so sound that would normally escape into the upper atmosphere instead bounces back and travels near the surface. By repeatedly reflecting sound between the ground and the inversion layer, a temperature inversion increases the distance that low-frequency sounds can travel.
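A minimal sketch of why an inversion bends sound back downward (the temperatures are assumed for illustration): the speed of sound in air is roughly v ≈ 331 + 0.6T m/s, with T in °C. With 0 °C air at the surface (v ≈ 331 m/s) beneath 10 °C air aloft (v ≈ 337 m/s), the upper part of an upward-travelling wavefront moves faster than the lower part, so the wave is refracted back toward the slower, colder air near the ground.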
Record extremes
Particularly foggy places include Hamilton, New Zealand, and the Grand Banks off the coast of Newfoundland (the meeting place of the cold Labrador Current from the north and the much warmer Gulf Stream from the south). Some of the foggiest land areas in the world include Argentia (Newfoundland) and Point Reyes (California), each with over 200 foggy days per year. Even in generally warmer southern Europe, thick and localized fog is often found in lowlands and valleys, such as the lower Po Valley and the Arno and Tiber valleys in Italy, the Ebro Valley in northeastern Spain, and the Swiss Plateau, especially in the Seeland area, in late autumn and winter. Other notably foggy areas include southern coastal Chile; coastal Namibia; Nord, Greenland; and the Severnaya Zemlya islands.
As a water source
Redwood forests in California receive approximately 30–40% of their moisture from coastal fog by way of fog drip. Changes in climate patterns could result in relative drought in these areas. Some animals, including insects, depend on wet fog as a principal source of water, particularly in otherwise desert climes, as along many African coastal areas. Some coastal communities use fog nets to extract moisture from the atmosphere where groundwater pumping and rainwater collection are insufficient. The type of fog that forms varies with local climatic conditions.
Artificial fog
Artificial fog is man-made fog, usually created by vaporizing a water- and glycol- or glycerine-based fluid. The fluid is injected into a heated metal block, where it evaporates quickly. The resulting pressure forces the vapor out of a vent. Upon coming into contact with cool outside air, the vapor condenses into microscopic droplets and appears as fog. Such fog machines are primarily used for entertainment applications.
Historical references
The presence of fog has often played a key role in historical events, such as strategic battles. One example is the 1776 Battle of Long Island, when American General George Washington and his command evaded imminent capture by the British Army by using fog to conceal their escape. Another example is D-Day (6 June 1944) during World War II, when the Allies landed on the beaches of Normandy, France in fog conditions. The impaired visibility brought both advantages and setbacks for each side during that battle.