Dataset schema (column name, type, observed range):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
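Each record below pairs one Wikipedia article's text with its metadata columns. The following is a minimal, self-contained Python sketch (not part of the dataset itself) of how records with this schema can be represented and filtered, seeded with values from the first record below:

```python
# Hypothetical illustration only: records shaped like the schema above,
# using the first row of this dump as sample data.
records = [
    {
        "id": 13_298_565,
        "url": "https://en.wikipedia.org/wiki/Skid-to-turn",
        "text": "Skid-to-turn is ...",  # full article text in the real record
        "source": "Skid-to-turn",
        "categories": ["Chemistry", "Engineering"],
        "token_count": 235,
        "subcategories": ["Aerospace engineering", "Aerodynamics", "Fluid dynamics"],
    },
]

# Example query: engineering-tagged articles with at least 200 tokens.
selected = [r for r in records
            if "Engineering" in r["categories"] and r["token_count"] >= 200]
for r in selected:
    print(r["id"], r["source"], r["token_count"])
```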
13,298,565
https://en.wikipedia.org/wiki/Skid-to-turn
Skid-to-turn is a technique for turning an aeronautical vehicle. It applies to vehicles such as aircraft and missiles. In skid-to-turn, the vehicle does not roll to a preferred angle. Instead, commands to the control surfaces are mixed to produce the maneuver in the desired direction. This is distinct from the coordinated turn used by aircraft pilots. For instance, a vehicle flying horizontally may be turned in the horizontal plane by the application of rudder controls to place the body at a sideslip angle relative to the airflow. This sideslip flow then produces a force in the horizontal plane that turns the vehicle's velocity vector. The benefit of the skid-to-turn maneuver is that it can be performed much more quickly than a coordinated turn, which is useful when trying to correct for small errors. The disadvantage occurs if the vehicle has greater maneuverability in one body plane than another; in that case the turns are less efficient and either consume greater thrust or cause a greater loss of aircraft specific energy than coordinated turns. See also Skid steer References External links Automatic control of aircraft and missiles by John H. Blakelock Aerodynamics
Skid-to-turn
[ "Chemistry", "Engineering" ]
235
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
13,298,714
https://en.wikipedia.org/wiki/Sea%20Quest%20%28drilling%20rig%29
The Sea Quest was a semi-submersible drilling rig. She discovered the UK's first North Sea oil on 14 September 1969 in the Arbroath Field. She also discovered the first giant oil field named Forties on 7 October 1970. The Sea Quest was built by Belfast shipbuilders Harland and Wolff for BP at a cost of £3.5 million and launched on 8 January 1966. The entire structure was high and weighed 150,000 tons, including three legs each in diameter and long that could be partially filled with water to control the height of the platform above the sea. In 1977, Sea Quest was sold to Sedco (now part of Transocean) and renamed Sedco 135C. She was towed to the west coast of Africa. On 17 January 1980, while drilling in the Warri area, Nigeria, a blowout occurred and the rig sustained extensive fire damage. The rig was then deliberately sunk in deep water. References Collapsed oil platforms Semi-submersibles Drilling rigs Ships of BP Transocean Ships built by Harland and Wolff Ships built in Belfast
Sea Quest (drilling rig)
[ "Chemistry" ]
226
[ "Petroleum", "Petroleum stubs" ]
13,298,795
https://en.wikipedia.org/wiki/Isomalathion
Isomalathion is an impurity found in some batches of malathion. Whereas the structure of malathion is, generically, RSP(S)(OCH3)2, the connectivity of isomalathion is RSPO(SCH3)(OCH3). It arises when malathion is heated. Being significantly more toxic to humans than malathion, it has resulted in human poisonings. In 1976, numerous malaria workers in Pakistan were poisoned by isomalathion. It is an inhibitor of carboxylesterase. References Phosphorodithioates Succinate esters
Isomalathion
[ "Chemistry", "Biology" ]
130
[ "Biotechnology stubs", "Functional groups", "Phosphorodithioates", "Biochemistry stubs", "Biochemistry" ]
13,298,962
https://en.wikipedia.org/wiki/Framework-oriented%20design
Framework Oriented Design (FOD) is a programming paradigm that uses existing frameworks as the basis for an application design. The framework can be thought of as a fully functioning template application. Application development consists of modifying callback procedure behaviour and modifying object behaviour using inheritance. This paradigm provides the patterns for understanding development with Rapid Application Development (RAD) systems such as Delphi, where the Integrated Development Environment (IDE) provides the template application and the programmer fills in the appropriate event handlers. The developer has the option of modifying existing objects via inheritance. References C++ Hierarchy Design Idioms by Stephen C. Dewhurst of www.semantics.org. Software design
Framework-oriented design
[ "Engineering" ]
136
[ "Design", "Software design" ]
13,299,905
https://en.wikipedia.org/wiki/Pyroglutamyl-histidyl-glycine
Pyroglutamyl-histidyl-glycine (pEHG) is an endogenous tripeptide that acts as a tissue-specific antimitotic and selectively inhibits the proliferation of colon epithelial cells. Early research indicated that pEHG had anorectic effects in mice and was possibly involved in the pathophysiology of anorexia nervosa. However, subsequent studies have found that pEHG lacks anorectic effects and does not alter food intake in mice. References Tripeptides
Pyroglutamyl-histidyl-glycine
[ "Chemistry", "Biology" ]
113
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
13,299,953
https://en.wikipedia.org/wiki/FG-7142
FG-7142 (ZK-31906) is a drug which acts as a partial inverse agonist at the benzodiazepine allosteric site of the GABAA receptor. It has anorectic, anxiogenic and pro-convulsant effects. It also increases release of acetylcholine and noradrenaline, and improves memory retention in animal studies. References Anxiogenics Beta-Carbolines Carboxamides Convulsants GABAA receptor negative allosteric modulators Nootropics
FG-7142
[ "Chemistry", "Biology" ]
119
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
13,300,087
https://en.wikipedia.org/wiki/John%20Deere%20snowmobiles
John Deere was the trade name of snowmobiles designed and built by John Deere from 1972 to 1984. The initial design and testing phase came in 1970–1971, when engineers tested other popular snowmobiles and found ways to improve them. The machines were produced by the John Deere Horicon Works of Horicon, Wisconsin, along with lawn and garden products. Lawn and garden equipment is still manufactured there. John Deere also had its own range of snowmobile suits. Marketing The slogan "Nothing Runs Like a Deere", still used today by Deere & Co., started with the John Deere snowmobile line in 1972. From 1978 to 1980, JD used the slogan "Big John - Little John." In 1980, another new slogan was introduced: "Ride the new breed of Deere". In 1980, John Deere was the official supplier of snowmobiles for the Winter Olympic Games in Lake Placid, New York. Market exit In 1982–1984, the snowmobile market was in a downward slide, and the driving force behind the snowmobile program, executive vice president Robert Carlson, had left the company. This made ending the snowmobile program an easy decision for Deere. The parts supply and all snowmobile-related resources were sold to Polaris. There was an understanding that Polaris would continue where Deere left off, selling snowmobiles and parts to the Deere dealers that were interested. This never worked out. Recently, a prototype Liquifire, which would have been one of the first snowmobiles to feature independent front suspension, was uncovered in a Polaris warehouse. The Snowfire was the last production snowmobile on the market to have a free-air engine, and the last snowmobile in production for John Deere. Enduro Team Deere In 1974, a factory-sponsored cross-country race team was assembled to go along with the introduction of the 295/S, Deere's first purpose-built snowmobile for cross-country racing. The team would eventually be known as "Enduro Team Deere". The team had many wins, the most notable being the 1976 Minneapolis - St. Paul International 500. Brian Nelson brought home the trophy on his Liquidator. His sled is currently on display at the Snowmobile Hall of Fame and Museum in St. Germain, Wisconsin. 1977 was the last year for the factory program. Instead, Deere offered support and incentives for independent racers. Models A total of twenty-one models were produced: Kioritz made engines for CCW, so they are the same. Kawasaki produced the John Deere-designed Fireburst engines. Comet first started making snowmobile clutches for John Deere. The 94C Duster clutch and the 102C clutch were developed exclusively for John Deere. References jdsleds.com SnowmobileData.com External links official web site Snowmobile manufacturers John Deere vehicles
John Deere snowmobiles
[ "Engineering" ]
595
[ "Engineering vehicles", "John Deere vehicles" ]
13,300,194
https://en.wikipedia.org/wiki/Advanced%20Traffic%20Management%20System
The advanced traffic management system (ATMS) field is a primary subfield within the intelligent transportation system (ITS) domain, and is used in the United States. The ATMS view is a top-down management perspective that integrates technology primarily to improve the flow of vehicle traffic and improve safety. Real-time traffic data from cameras, speed sensors, etc. flows into a transportation management center (TMC) where it is integrated and processed (e.g. for incident detection), and may result in actions taken (e.g. traffic routing, DMS messages) with the goal of improving traffic flow. The National ITS Architecture defines the following primary goals and metrics for ITS: Increase transportation system efficiency Enhance mobility Improve safety Reduce fuel consumption and environmental cost Increase economic productivity Create an environment for an ITS market History In 1956, the National Interstate and Defense Highways Act initiated a 35-year $114 billion program that designed and constructed the interstate highway system. This hugely successful program was mostly complete by 1991, and the era of build-out was over. In the mid to late 1980s transportation officials from federal and state governments, the private sector, and universities began a series of informal meetings discussing the future of transportation. This included meetings held by the California Department of Transportation (Caltrans) in October 1986 to discuss technology applied to future advanced highways. In June 1988 in Washington, DC, the group formalized its structure and chose the name "Mobility 2000". In 1990, Mobility 2000 morphed into ITS America, the main ITS advocacy and policy group in the US. The initial name of ITS America was IVHS America and was changed in 1994 to reflect a broader intermodal perspective. The 1991 Intermodal Surface Transportation Efficiency Act (ISTEA) was the first post-build-out transportation act. It initiated a new approach focused on efficiency, intelligence, and intermodalism. It had a primary goal of providing "the foundation for the nation to compete in the global economy". This new mixture of infrastructure and technology was identified as an intelligent transportation system (ITS) and was the centerpiece of the 1991 ISTEA act. ITS is loosely defined as "the application of computers, communications, and sensor technology to surface transportation". Subsequent surface transportation bills have continued ITS funding and development. In 2005 the SAFETEA-LU (Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users) surface transportation spending bill was signed into law. Functional areas Real-time traffic monitoring Dynamic message sign monitoring and control Incident monitoring Traffic camera monitoring and control Active traffic management (ATM) Chain control Ramp meter monitoring and control Arterial management Traffic signal monitoring and control Automated warning systems Road Weather Information System (RWIS) monitoring Highway advisory radio Urban traffic management and control Systems IRIS open-source ATMS Project Georgia Navigator Kimley-Horn integrated transportation system (KITS) See also Traffic optimization Speed limit: variable speed limits Variable-message sign PTV Group References Transportation engineering Intelligent transportation systems Road traffic management
Advanced Traffic Management System
[ "Technology", "Engineering" ]
607
[ "Transport systems", "Industrial engineering", "Information systems", "Transportation engineering", "Civil engineering", "Warning systems", "Intelligent transportation systems" ]
13,300,417
https://en.wikipedia.org/wiki/Theodrenaline
Theodrenaline, also known as noradrenalinoethyltheophylline or as noradrenaline theophylline, is a chemical linkage of norepinephrine (noradrenaline) and theophylline used as a cardiac stimulant. It is sometimes combined with cafedrine. See also Cafedrine Fenethylline Theophylline ephedrine References Adenosine receptor antagonists Antihypotensive agents Catecholamines Cardiac stimulants Phenylethanolamines Xanthines
Theodrenaline
[ "Chemistry" ]
123
[ "Alkaloids by chemical classification", "Xanthines" ]
13,300,442
https://en.wikipedia.org/wiki/Cafedrine
Cafedrine, sold under the brand name Akrinor among others, is a chemical linkage of norephedrine and theophylline and is a cardiac stimulant and antihypotensive agent used to increase blood pressure in people with hypotension. It has been marketed in Europe, South Africa, and Indonesia. There has been concern about cafedrine as a potential performance-enhancing drug and doping agent in sports. See also Fenethylline Theodrenaline Theophylline ephedrine References Adenosine receptor antagonists Antihypotensive agents Beta-Hydroxyamphetamines Cardiac stimulants Norepinephrine releasing agents Sympathomimetics Xanthines
Cafedrine
[ "Chemistry" ]
158
[ "Alkaloids by chemical classification", "Xanthines" ]
13,300,452
https://en.wikipedia.org/wiki/Gepefrine
Gepefrine, also known as 3-hydroxyamphetamine or α-methyl-meta-tyramine and sold under the brand names Pressionorm and Wintonin, is a sympathomimetic medication used as an antihypotensive agent which has been marketed in Germany. Pharmacology Gepefrine is described as a sympathomimetic and antihypotensive agent. Chemistry Gepefrine, also known as 3-hydroxy-α-methylphenethylamine or as 3-hydroxyamphetamine, is a substituted phenethylamine and amphetamine derivative. It is used pharmaceutically as the (S)-enantiomer and as the tartrate salt. Related compounds include meta-tyramine (3-hydroxyphenethylamine), 4-hydroxyamphetamine (norpholedrine), 3,4-dihydroxyamphetamine (α-methyldopamine), and metaraminol ((1R,2S)-3,β-dihydroxyamphetamine), among others. History Gepefrine was synthesized by 1968 and was introduced for medical use in Germany by 1981. Society and culture Names Gepefrine is the generic name of the drug. Brand names of gepefrine include Pressionorm and Wintonin. Other drugs Gepefrine is a known metabolite of amphetamine in rats. References Abandoned drugs Antihypotensive agents 3-Hydroxyphenyl compounds Norepinephrine-dopamine releasing agents Substituted amphetamines Sympathomimetics Vasoconstrictors
Gepefrine
[ "Chemistry" ]
366
[ "Drug safety", "Abandoned drugs" ]
13,300,460
https://en.wikipedia.org/wiki/Prenalterol
Prenalterol, sold under the brand name Hyprenan, is a sympathomimetic agent and cardiac stimulant which acts as a β1-adrenergic receptor partial agonist and is used in the treatment of heart failure. It has selectivity for the β1-adrenergic receptor. Its partial agonist activity or intrinsic sympathomimetic activity is about 60%. It is said to have much greater impact on myocardial contractility than on heart rate. The drug has been marketed in Denmark, Norway, and Sweden. Chemistry Synthesis Stereospecific Prenalterol exhibits adrenergic agonist activity in spite of an interposed oxymethylene group. The stereospecific synthesis devised for this molecule relies on the fact that the side chain is very similar in oxidation state to that of a sugar. Condensation of monobenzone (2) with the epoxide derived from α-D-glucofuranose affords the glycosylated derivative (3). Hydrolytic removal of the acetonide protecting groups followed by cleavage of the sugar with periodate gives aldehyde (4). This is reduced to the glycol by means of NaBH4 and the terminal alcohol is converted to the mesylate (5). Displacement of the leaving group with isopropylamine followed by hydrogenolytic removal of the O-benzyl ether affords the β1-adrenergic selective adrenergic agonist prenalterol (6). Racemic Several preparations of the racemic mixture have been reported. See also Alifedrine Xamoterol References Further reading Amines Beta1-adrenergic agonists Cardiac stimulants Inotropic agents Phenol ethers 4-Hydroxyphenyl compounds Secondary alcohols Sympathomimetics
Prenalterol
[ "Chemistry" ]
393
[ "Amines", "Bases (chemistry)", "Functional groups" ]
13,300,471
https://en.wikipedia.org/wiki/Dimetofrine
Dimetofrine, also known as dimethophrine or dimetophrine and sold under the brand names Dovida, Pressamina, and Superten, is a medication described as a sympathomimetic, vasoconstrictor, and cardiac stimulant. It is said to be similarly or less effective than midodrine in the treatment of orthostatic hypotension and shows substantially lower potency. The drug is a selective α1-adrenergic receptor agonist but is also said to have β-adrenergic receptor agonist activity. It is a substituted phenethylamine and is also known as 3,5-dimethoxy-4,β-dihydroxy-N-methylphenethylamine. Its chemical structure is similar to that of desglymidodrine (3,6-dimethoxy-β-hydroxyphenethylamine), the active metabolite of midodrine. Dimetofrine remained marketed only in Italy in 2000. References Abandoned drugs Alpha-1 adrenergic receptor agonists Antihypotensive agents Beta-adrenergic agonists Cardiac stimulants Methoxy compounds Phenols Phenylethanolamines Resorcinol ethers Sympathomimetics Vasoconstrictors
Dimetofrine
[ "Chemistry" ]
295
[ "Drug safety", "Abandoned drugs" ]
13,300,481
https://en.wikipedia.org/wiki/Mephentermine
Mephentermine, sold under the brand name Wyamine among others, is a sympathomimetic medication which was previously used in the treatment of low blood pressure but is mostly no longer marketed. It is used by injection into a vein or muscle, by inhalation, and by mouth. Side effects of mephentermine include dry mouth, sedation, reflex bradycardia, arrhythmias, and hypertension. Mephentermine induces the release of norepinephrine and dopamine and is described as an indirectly acting sympathomimetic and psychostimulant. Its sympathomimetic effects are mediated by indirect activation of α- and β-adrenergic receptors. Chemically, it is a substituted phenethylamine and amphetamine and is closely related to phentermine and methamphetamine. Mephentermine was first described and introduced for medical use by 1952. It was discontinued in the United States between 2000 and 2004. The medication appears to remain available only in India. Misuse of mephentermine for recreational and performance-enhancing purposes has been increasingly encountered in modern times, especially in India. Medical uses For maintenance of blood pressure in hypotensive states, the dose for adults is 30 to 45 mg as a single dose, repeated as necessary or followed by intravenous infusion of 0.1% mephentermine in 5% dextrose, with the rate and duration of administration depending on the patient's response. For hypotension secondary to spinal anesthesia in obstetric patients, the dose for adults is 15 mg as a single dose, repeated if needed. The maximum dose is 30 mg. Mephentermine has also been used as a decongestant. Available forms Mephentermine is available in the form of 15 and 30 mg/mL solutions for intravenous infusion or intramuscular injection and in the form of 10 mg oral tablets. It has also been available in the form of inhalers. Contraindications Contraindications include low blood pressure caused by phenothiazines, hypertension, pheochromocytoma, and treatment with monoamine oxidase inhibitors. For shock due to loss of blood or fluid, fluid replacement therapy should be given first. Caution is advised in cardiovascular disease, hypertension, hyperthyroidism, chronic illnesses, lactation, and pregnancy; skin dryness and headache have also been reported. Side effects The most common side effects of mephentermine are drowsiness, incoherence, hallucinations, convulsions, and slow heart rate (reflex bradycardia). Other reported effects include fear, anxiety, restlessness, tremor, insomnia, confusion, irritability, and psychosis, as well as nausea, vomiting, reduced appetite, urinary retention, dyspnea, weakness, and neck pain. Potentially fatal reactions are due to atrioventricular block, central nervous system stimulation, cerebral hemorrhage, pulmonary edema, and ventricular arrhythmias. Interactions Mephentermine antagonizes the effect of agents that lower blood pressure. Severe hypertension may occur with monoamine oxidase inhibitors and possibly tricyclic antidepressants. Additive vasoconstricting effects occur with ergot alkaloids and oxytocin. Potentially fatal drug interactions include the risk of abnormal heart rhythm in people undergoing anesthesia with cyclopropane and halothane. Pharmacology Pharmacodynamics Mephentermine is thought to act as a releasing agent of norepinephrine and dopamine. It is described as an indirectly acting sympathomimetic, cardiac stimulant, adrenergic, vasoconstrictor, antihypotensive agent, and psychostimulant. Its sympathomimetic effects are mediated by indirect activation of α- and β-adrenergic receptors. 
Mephentermine appears to act by indirect stimulation of β-adrenergic receptors, causing the release of norepinephrine from its storage sites. It has a positive inotropic effect on the myocardium. AV conduction and the refractory period of the AV node are shortened, with an increase in ventricular conduction velocity. It dilates arteries and arterioles in the skeletal muscle and mesenteric vascular beds, leading to an increase in venous return. Pharmacokinetics Its onset of action is 5 to 15 minutes with intramuscular injection and is immediate with intravenous administration. Its duration of action is 4 hours with intramuscular injection and 30 minutes with intravenous administration. Mephentermine, along with phentermine, is known to be produced as a metabolite of the orally administered local anesthetic oxetacaine (oxethazaine). Chemistry Mephentermine, also known as N,α,α-trimethylphenethylamine or N,α-dimethylamphetamine, is a phenethylamine and amphetamine derivative. It is the N-methylated analogue of phentermine (α-methylamphetamine) and is also known as N-methylphentermine. In addition, mephentermine is the α-methylated analogue of methamphetamine or the α,α-dimethylated derivative of amphetamine. Synthesis Mephentermine can be synthesized beginning with a Henry reaction between benzaldehyde (1) and 2-nitropropane (2) to give 2-methyl-2-nitro-1-phenylpropan-1-ol (3). The nitro group is reduced with zinc in sulfuric acid, giving 2-phenyl-1,1-dimethylethanolamine (4). Imine formation by dehydration with benzaldehyde gives (5). Alkylation with iodomethane leads to (6). Halogenation with thionyl chloride gives (7). Lastly, a Rosenmund reduction completes the synthesis of mephentermine (8). Mephentermine can also be synthesized by condensation of phentermine with benzaldehyde to give a Schiff base, which can then be alkylated with methyl iodide to give mephentermine. History Mephentermine was first described in the literature and was introduced for medical use under the brand name Wyamine by 1952. It was discontinued in the United States between 2000 and 2004. Society and culture Names Mephentermine is the generic name of the drug. In the case of the sulfate salt, the names mephentermine sulfate and mephentermine sulphate are used. Synonyms of mephentermine include mephetedrine and mephenterdrine. Brand names of mephentermine include Wyamine, Fentermin, and Mephentine. Availability Mephentermine is no longer available in the United States and remains available in few or no other countries. However, it appears to remain available in India. It has also remained available in Brazil for use in veterinary medicine. Recreational use Misuse of mephentermine for recreational and/or performance-enhancing purposes has been reported, along with addiction, dependence, and serious health complications. It has been especially encountered in India, the only country in which mephentermine appears to remain available for medical use. Exercise and sports Mephentermine has been used as a performance-enhancing drug in exercise and sports. It is on the World Anti-Doping Agency (WADA) list of prohibited substances. Research Mephentermine was evaluated in the treatment of congestive heart failure in one small clinical study but was found to be ineffective. Veterinary use Mephentermine has been used in veterinary medicine in Brazil under the brand names Potenay and Potemax. 
References Abandoned drugs Antihypotensive agents Carbonic anhydrase activators Cardiac stimulants Decongestants Drugs acting on the cardiovascular system Drugs acting on the nervous system Euphoriants Human drug metabolites Methamphetamines Norepinephrine-dopamine releasing agents Phentermines Stimulants Sympathomimetics Vasoconstrictors World Anti-Doping Agency prohibited substances
Mephentermine
[ "Chemistry" ]
1,753
[ "Chemicals in medicine", "Drug safety", "Human drug metabolites", "Abandoned drugs" ]
13,300,522
https://en.wikipedia.org/wiki/Norfenefrine
Norfenefrine, also known as meta-octopamine or norphenylephrine and sold under the brand name Novadral among others, is a sympathomimetic medication which is used in the treatment of hypotension (low blood pressure). Along with its structural isomer p-octopamine and the tyramines, norfenefrine is a naturally occurring endogenous trace amine and plays a role as a minor neurotransmitter in the brain. Medical uses Norfenefrine is used in the treatment of hypotension (low blood pressure). It is said to be similarly effective or less effective than midodrine. Pharmacology Pharmacodynamics Norfenefrine is described as an α-adrenergic receptor agonist and sympathomimetic agent. It is said to act predominantly as an α1-adrenergic receptor agonist. Chemistry Norfenefrine, also known as 3,β-dihydroxyphenethylamine, is a substituted phenethylamine derivative. It is an analogue of norepinephrine (3,4,β-trihydroxyphenethylamine), of meta-tyramine (3-hydroxyphenethylamine), of phenylephrine ((R)-β,3-dihydroxy-N-methylphenethylamine), of etilefrine (3,β-dihydroxy-N-ethylphenethylamine), and of metaterol (3,β-dihydroxy-N-isopropylphenethylamine), as well as of metaraminol ((1R,2S)-3,β-dihydroxy-α-methylphenethylamine). Norfenefrine is used medically as the hydrochloride salt. The predicted log P of norfenefrine is -0.28 to -0.95. Society and culture Names Norfenefrine is the generic name of the drug. Synonyms of norfenefrine include hydroxyphenylethanolamine, nor-phenylephrine, and m-norsynephrine, among others. Brand names of norfenefrine include Novadral, A.S. COR, Coritat, Energona, Hypolind, Norfenefrin Ziethen, and Norfenefrin-Ratiopharm, among others. Availability Norfenefrine is marketed in Europe, Japan, and Mexico. References Alpha-adrenergic agonists Antihypotensive agents Cardiac stimulants Norepinephrine-dopamine releasing agents Neurotransmitters Peripherally selective drugs 3-Hydroxyphenyl compounds Phenylethanolamines Sympathomimetics TAAR1 agonists Trace amines
Norfenefrine
[ "Chemistry" ]
632
[ "Neurochemistry", "Neurotransmitters" ]
2,528
https://en.wikipedia.org/wiki/Adenylyl%20cyclase
Adenylate cyclase (EC 4.6.1.1, also commonly known as adenyl cyclase and adenylyl cyclase, abbreviated AC) is an enzyme with systematic name ATP diphosphate-lyase (cyclizing; 3′,5′-cyclic-AMP-forming). It catalyzes the following reaction: ATP = 3′,5′-cyclic AMP + diphosphate It has key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues. All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters. Classes Class I The first class of adenylyl cyclases occurs in many bacteria including E. coli (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that E. coli deprived of glucose produce cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing other sugars. cAMP exerts this effect by binding the transcription factor CRP, also known as CAP. Class I ACs are large cytosolic enzymes (~100 kDa) with a large regulatory domain (~50 kDa) that indirectly senses glucose levels. No crystal structure is available for class I AC, although some indirect structural information is available for this class. It is known that the N-terminal half is the catalytic portion, and that it requires two Mg2+ ions. S103, S113, D114, D116 and W118 are the five absolutely essential residues. The class I catalytic domain belongs to the same superfamily as the palm domain of DNA polymerase beta. Aligning its sequence onto the structure of a related archaeal CCA tRNA nucleotidyltransferase allows for assignment of the residues to specific functions: γ-phosphate binding, structural stabilization, DxD motif for metal ion binding, and finally ribose binding. Class II These adenylyl cyclases are toxins secreted by pathogenic bacteria such as Bacillus anthracis, Bordetella pertussis, Pseudomonas aeruginosa, and Vibrio vulnificus during infections. These bacteria also secrete proteins that enable the AC-II to enter host cells, where the exogenous AC activity undermines normal cellular processes. The genes for Class II ACs are known as cyaA, one of which is anthrax toxin. Several crystal structures are known for AC-II enzymes. Class III These adenylyl cyclases are the most familiar based on extensive study due to their important roles in human health. They are also found in some bacteria, notably Mycobacterium tuberculosis where they appear to have a key role in pathogenesis. Most AC-IIIs are integral membrane proteins involved in transducing extracellular signals into intracellular responses. A Nobel Prize was awarded to Earl Sutherland in 1971 for discovering the key role of AC-III in human liver, where adrenaline indirectly stimulates AC to mobilize stored energy in the "fight or flight" response. 
The effect of adrenaline is via a G protein signaling cascade, which transmits chemical signals from outside the cell across the membrane to the inside of the cell (cytoplasm). The outside signal (in this case, adrenaline) binds to a receptor, which transmits a signal to the G protein, which transmits a signal to adenylyl cyclase, which transmits a signal by converting adenosine triphosphate to cyclic adenosine monophosphate (cAMP). cAMP is known as a second messenger. Cyclic AMP is an important molecule in eukaryotic signal transduction, a so-called second messenger. Adenylyl cyclases are often activated or inhibited by G proteins, which are coupled to membrane receptors and thus can respond to hormonal or other stimuli. Following activation of adenylyl cyclase, the resulting cAMP acts as a second messenger by interacting with and regulating other proteins such as protein kinase A and cyclic nucleotide-gated ion channels. Photoactivated adenylyl cyclase (PAC) was discovered in Euglena gracilis and can be expressed in other organisms through genetic manipulation. Shining blue light on a cell containing PAC activates it and abruptly increases the rate of conversion of ATP to cAMP. This is a useful technique for researchers in neuroscience because it allows them to quickly increase the intracellular cAMP levels in particular neurons, and to study the effect of that increase in neural activity on the behavior of the organism. A green-light activated rhodopsin adenylyl cyclase (CaRhAC) has recently been engineered by modifying the nucleotide binding pocket of rhodopsin guanylyl cyclase. Structure Most class III adenylyl cyclases are transmembrane proteins with 12 transmembrane segments. The protein is organized with 6 transmembrane segments, then the C1 cytoplasmic domain, then another 6 membrane segments, and then a second cytoplasmic domain called C2. The important parts for function are the N-terminus and the C1 and C2 regions. The C1a and C2a subdomains are homologous and form an intramolecular 'dimer' that forms the active site. In Mycobacterium tuberculosis and many other bacterial cases, the AC-III polypeptide is only half as long, comprising one 6-transmembrane domain followed by a cytoplasmic domain, but two of these form a functional homodimer that resembles the mammalian architecture with two active sites. In non-animal class III ACs, the catalytic cytoplasmic domain is seen associated with other (not necessarily transmembrane) domains. Class III adenylyl cyclase domains can be further divided into four subfamilies, termed class IIIa through IIId. Animal membrane-bound ACs belong to class IIIa. Mechanism The reaction happens with two metal cofactors (Mg or Mn) coordinated to the two aspartate residues on C1. They perform a nucleophilic attack of the 3'-OH group of the ribose on the α-phosphoryl group of ATP. The two lysine and aspartate residues on C2 selects ATP over GTP for the substrate, so that the enzyme is not a guanylyl cyclase. A pair of arginine and asparagine residues on C2 stabilizes the transition state. In many proteins, these residues are nevertheless mutated while retaining the adenylyl cyclase activity. Types There are ten known isoforms of adenylyl cyclases in mammals: These are also sometimes called simply AC1, AC2, etc., and, somewhat confusingly, sometimes Roman numerals are used for these isoforms that all belong to the overall AC class III. 
They differ mainly in how they are regulated, and are differentially expressed in various tissues throughout mammalian development. Regulation Adenylyl cyclase is regulated by G proteins, which can be found in the monomeric form or the heterotrimeric form, consisting of three subunits. Adenylyl cyclase activity is controlled by heterotrimeric G proteins. The inactive or inhibitory form exists when the complex consists of alpha, beta, and gamma subunits, with GDP bound to the alpha subunit. In order to become active, a ligand must bind to the receptor and cause a conformational change. This conformational change causes the alpha subunit to dissociate from the complex and become bound to GTP. This G-alpha-GTP complex then binds to adenylyl cyclase and causes activation and the release of cAMP. Since a good signal requires the help of enzymes, which turn on and off signals quickly, there must also be a mechanism by which adenylyl cyclase deactivates and inhibits cAMP. The deactivation of the active G-alpha-GTP complex is accomplished rapidly by GTP hydrolysis, a reaction catalyzed by the intrinsic GTPase activity of the alpha subunit. It is also regulated by forskolin, as well as other isoform-specific effectors: Isoforms I, III, and VIII are also stimulated by Ca2+/calmodulin. Isoforms V and VI are inhibited by Ca2+ in a calmodulin-independent manner. Isoforms II, IV and IX are stimulated by the alpha subunit of the G protein. Isoforms I, V and VI are most clearly inhibited by Gi, while other isoforms show less dual regulation by the inhibitory G protein. Soluble AC (sAC) is not a transmembrane form and is not regulated by G proteins or forskolin, but instead acts as a bicarbonate/pH sensor. It is anchored at various locations within the cell and, with phosphodiesterases, forms local cAMP signalling domains. In neurons, calcium-sensitive adenylyl cyclases are located next to calcium ion channels for faster reaction to Ca2+ influx; they are suspected of playing an important role in learning processes. This is supported by the fact that adenylyl cyclases are coincidence detectors, meaning that they are activated only by several different signals occurring together. In peripheral cells and tissues, adenylyl cyclases appear to form molecular complexes with specific receptors and other signaling proteins in an isoform-specific manner. Function Individual transmembrane adenylyl cyclase isoforms have been linked to numerous physiological functions. Soluble adenylyl cyclase (sAC, AC10) has a critical role in sperm motility. Adenylyl cyclase has been implicated in memory formation, functioning as a coincidence detector. Class IV AC-IV was first reported in the bacterium Aeromonas hydrophila, and the structure of the AC-IV from Yersinia pestis has been reported. These are the smallest of the AC enzyme classes; the AC-IV (CyaB) from Yersinia is a dimer of 19 kDa subunits with no known regulatory components. AC-IV forms a superfamily with mammalian thiamine-triphosphatase called CYTH (CyaB, thiamine triphosphatase). Classes V and VI These forms of AC have been reported in specific bacteria (Prevotella ruminicola and Rhizobium etli, respectively) and have not been extensively characterized. There are a few extra members (~400 in Pfam) known to be in class VI. Class VI enzymes possess a catalytic core similar to the one in Class III. 
Additional images References Further reading External links Interactive 3D views of Adenylate cyclase at EC 4.6.1 Cell signaling Signal transduction
Adenylyl cyclase
[ "Chemistry", "Biology" ]
2,427
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
2,546
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major motivating factor for the development of computer science. Logical foundations While the roots of formalized logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalized mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system, there are true statements that cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples of undecidable questions. First implementations Shortly after World War II, the first general-purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum-tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theorist in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theorist constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia. The "heuristic" approach of the Logic Theorist tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. 
Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Decidability of the problem Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the common case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the semantically valid well-formed formulas, so the valid formulas are computably enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory), cannot always be recognized. The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any consistent theory whose axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of axioms is allowed to be infinite enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers). Related problems A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. 
Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems that use model checking as an inference rule. There are also programs that were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player. Applications Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors. Other uses of theorem provers include program synthesis, constructing programs that satisfy a formal specification. Automated theorem provers have been integrated with proof assistants, including Isabelle/HOL. Applications of theorem provers are also found in natural language processing and formal semantics, where they are used to analyze discourse representations. First-order theorem proving In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published. First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed. Relationship with SMT There is substantial overlap between first-order automated theorem provers and SMT solvers. Generally, automated theorem provers focus on supporting full first-order logic with quantifiers, whereas SMT solvers focus more on supporting various theories (interpreted predicate symbols). 
ATPs excel at problems with lots of quantifiers, whereas SMT solvers do well on large problems without quantifiers. The line is blurry enough that some ATPs participate in SMT-COMP, while some SMT solvers participate in CASC. Benchmarks, competitions, and sources The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples—the Thousands of Problems for Theorem Provers (TPTP) Problem Library—as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below. E is a high-performance prover for full first-order logic, but built on a purely equational calculus, originally developed in the automated reasoning group of Technical University of Munich under the direction of Wolfgang Bibel, and now at Baden-Württemberg Cooperative State University in Stuttgart. Otter, developed at the Argonne National Laboratory, is based on first-order resolution and paramodulation. Otter has since been replaced by Prover9, which is paired with Mace4. SETHEO is a high-performance system based on the goal-directed model elimination calculus, originally developed by a team under direction of Wolfgang Bibel. E and SETHEO have been combined (with other systems) in the composite theorem prover E-SETHEO. Vampire was originally developed and implemented at Manchester University by Andrei Voronkov and Kryštof Hoder. It is now developed by a growing international team. It has won the FOF division (among other divisions) at the CADE ATP System Competition regularly since 2001. Waldmeister is a specialized system for unit-equational first-order logic developed by Arnim Buch and Thomas Hillenbrand. It won the CASC UEQ division for fourteen consecutive years (1997–2010). SPASS is a first-order logic theorem prover with equality. This is developed by the research group Automation of Logic, Max Planck Institute for Computer Science. The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above. Popular techniques First-order resolution with unification Model elimination Method of analytic tableaux Superposition and term rewriting Model checking Mathematical induction Binary decision diagrams DPLL Higher-order unification Quantifier elimination Software systems Free software Alt-Ergo Automath CVC E IsaPlanner LCF Mizar NuPRL Paradox Prover9 PVS SPARK (programming language) Twelf Z3 Theorem Prover Proprietary software CARINE Wolfram Mathematica ResearchCyc See also Curry–Howard correspondence Symbolic computation Ramanujan machine Computer-aided proof Formal verification Logic programming Proof checking Model checking Proof complexity Computer algebra system Program analysis (computer science) General Problem Solver Metamath language for formalized mathematics De Bruijn factor Notes References II . External links A list of theorem proving tools Formal methods
Automated theorem proving
[ "Mathematics", "Engineering" ]
2,680
[ "Automated theorem proving", "Mathematical logic", "Computational mathematics", "Software engineering", "Formal methods" ]
2,547
https://en.wikipedia.org/wiki/Agent%20Orange
Agent Orange is a chemical herbicide and defoliant, one of the tactical use Rainbow Herbicides. It was used by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. The U.S. was strongly influenced by the British who used Agent Orange during the Malayan Emergency. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. Agent Orange was produced in the United States beginning in the late 1940s and was used in industrial agriculture, and was also sprayed along railroads and power lines to control undergrowth in forests. During the Vietnam War, the U.S. military procured over , consisting of a fifty-fifty mixture of 2,4-D and dioxin-contaminated 2,4,5-T. Nine chemical companies produced it: Dow Chemical Company, Monsanto Company, Diamond Shamrock Corporation, Hercules Inc., Thompson Hayward Chemical Co., United States Rubber Company (Uniroyal), Thompson Chemical Co., Hoffman-Taff Chemicals, Inc., and Agriselect. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Vietnamese Red Cross estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable, while documenting cases of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans, however without having conclusively found either a causal relationship or a plausible biological carcinogenic mechanism. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects of the children of military personnel who were exposed to Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over or of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity is sharply reduced in contrast with unsprayed areas. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations ratified United Nations General Assembly Resolution 31/72 and the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages. Agent Orange was first used by British Commonwealth forces in Malaya during the Malayan Emergency. It was also used by the U.S. military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. Chemical composition The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD was a trace (typically 2–3 ppm, ranging from 50 ppb to 50 ppm) - but significant - contaminant of Agent Orange. Toxicology TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). The fat-soluble nature of TCDD causes it to enter the body readily through physical contact or ingestion. 
Dioxins accumulate easily in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the cell nucleus, where it influences gene expression. According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful. Development Several herbicides were developed as part of efforts by the United States and the United Kingdom to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33). In 1943, the United States Department of the Army contracted botanist (and later bioethicist) Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois Urbana-Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued. In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S. testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly. Early use In Malaya, the local unit of Imperial Chemical Industries researched defoliants as weed killers for rubber plantations. Roadside ambushes by the Malayan National Liberation Army were a danger to the British Commonwealth forces during the Malayan Emergency, several trials were made to defoliate vegetation that might hide ambush sites, but hand removal was found cheaper. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E. K. Woodford of Agricultural Research Council's Unit of Experimental Agronomy and H. G. H. Kearns of the University of Bristol. After the Malayan Emergency ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. Kennedy that the British had established a precedent for warfare with herbicides in Malaya. 
Use in the Vietnam War In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to help defoliate the lush jungle that was providing cover to his enemies. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. Many U.S. officials supported herbicide operations, pointing out that the British had already used herbicides and defoliants in Malaya during the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam. The herbicide operations were formally directed by the government of South Vietnam. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an olympic size pool holds approximately . As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters and possible ambush sites along roads and canals. Samuel P. Huntington argued that the program was also a part of a policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft, fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers. Altogether, over of Agent Orange were applied. The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA. The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period. 3.2% of South Vietnam's cultivated land was sprayed at least once between 1965 and 1971. 90% of herbicide use was directed at defoliation. The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. In 1965, members of the U.S. Congress were told, "crop destruction is understood to be the more important purpose ... but the emphasis is usually given to the jungle defoliation in public mention of the program." The first official acknowledgment of the programs came from the State Department in March 1966. 
When crops were destroyed, the Viet Cong would compensate for the loss of food by confiscating more food from local villages. Some military personnel reported being told they were destroying crops used to feed guerrillas, only to later discover that most of the destroyed food was actually produced to support the local civilian population. For example, according to Wil Verwey, 85% of the crop lands in Quang Ngai province were scheduled to be destroyed in 1970 alone. He estimated this would have caused famine and left hundreds of thousands of people without food or malnourished in the province. According to a report by the American Association for the Advancement of Science, the herbicide campaign had disrupted the food supply of more than 600,000 people by 1970. Many experts at the time, including plant physiologist and bioethicist Arthur Galston, opposed herbicidal warfare because of concerns about the side effects for humans and the environment of indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons in international conflicts. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon, as it was considered a herbicide and a defoliant and was used in an effort to destroy plant crops and to deprive the enemy of concealment, not to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged for using Agent Orange, then the United Kingdom and its Commonwealth nations should be charged since they also used it widely during the Malayan Emergency in the 1950s. In 1969, the United Kingdom commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law." The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. A study carried out by the Bionetics Research Laboratories between 1965 and 1968 found malformations in test animals caused by 2,4,5-T, a component of Agent Orange. The study was later brought to the attention of the White House in October 1969. Other studies reported similar results and the Department of Defense began to reduce the herbicide operation. On April 15, 1970, it was announced that the use of Agent Orange was suspended. Two brigades of the Americal Division in the summer of 1970 continued to use Agent Orange for crop destruction in violation of the suspension. An investigation led to disciplinary action against the brigade and division commanders because they had falsified reports to hide its use. Defoliation and crop destruction were completely stopped by June 30, 1971. Health effects There are various types of cancer associated with Agent Orange, including chronic B-cell leukemia, Hodgkin's lymphoma, multiple myeloma, non-Hodgkin's lymphoma, prostate cancer, respiratory cancer, lung cancer, and soft tissue sarcomas. 
However, whether these associations reflect causation remains disputed, and some reviews of the literature have questioned whether exposure to Agent Orange or its contaminants has been conclusively shown to cause these cancers in humans. Vietnamese people The government of Vietnam states that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include their children who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable. According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases. In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined indicated that the increase in birth defects/relative risk (RR) from exposure to Agent Orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to stillbirths, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies. Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate. Vietnam veterans While in Vietnam, U.S. and Free World Military Assistance Forces soldiers were told not to worry about Agent Orange and were persuaded the chemical was harmless. After returning home, Vietnam veterans from all countries that served began to suspect their ill health or the instances of their wives having miscarriages or children born with birth defects might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam. U.S. veterans U.S. 
veterans began to file claims with the Department of Veterans Affairs in 1977 for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge. Veterans may also qualify for compensation if they served on or near the perimeters of military bases in Thailand during the Vietnam Era (where herbicides were tested and stored outside of Vietnam), were crew members on C-123 planes flown after the Vietnam War, or were associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S. By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam. In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam and 51% said they supported compensation for Vietnamese Agent Orange victims. Australian and New Zealand veterans For years the Australian government maintained that its personnel had not been exposed to the herbicides, and several official investigations in Australia failed to prove otherwise, even though extant American investigations had already established that defoliants were sprayed at U.S. airbases including Bien Hoa Air Base, where Australian and New Zealand forces first served before being given their own tactical area of responsibility (TAOR). Even then, Australian and New Zealand military and non-military contributions saw personnel from both countries spread across Vietnam, such as at the hospitals at Bong Son and Qui Nhon, on secondments at various bases, and as flight crew and ground crew for flights into and out of Da Nang Air Base, all areas that were well documented as having been sprayed. It wasn't until a group of Australian veterans produced official military records, maps, and mission data as proof that the TAOR controlled by Australian and New Zealand forces in Vietnam had been sprayed with the chemicals in the presence of personnel that the Australian government was forced to change its stance. Only in 1994 did the Australian government finally admit that defoliants had been used in areas of Vietnam where Australian forces operated and that their effects may have been detrimental to some Vietnam veterans and their children. It was only in 2015 that the Australian War Memorial agreed to rewrite the official history of Australia's involvement in the Vietnam War to acknowledge that Australian soldiers were exposed to defoliants used in Vietnam. New Zealand was even slower to correct its error, with the government going as far as to deny the legitimacy of the Australian reports in a report called the "McLeod Report", published by Veterans Affairs NZ in 2001, thus infuriating New Zealand veterans and those associated with their cause. 
In 2006, progress was made in the form of a Memorandum of Understanding signed between the New Zealand government, representatives of New Zealand Vietnam veterans, and the Royal New Zealand Returned and Services' Association (RSA), providing monetary compensation for New Zealand Vietnam veterans who have conditions recognized as associated with exposure to Agent Orange, as determined by the United States National Academy of Sciences. In 2008, the New Zealand government finally admitted that New Zealanders had in fact been exposed to Agent Orange while serving in Vietnam and that the exposure was responsible for detrimental health conditions in veterans and their children. Amendments to the memorandum made in 2021 meant that more veterans were eligible for an ex gratia payment of NZ$40,000. National Academy of Medicine (Institute of Medicine) Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every two years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled Veterans and Agent Orange, the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled Veterans and Agent Orange: Update 2014. The report shows sufficient evidence of an association with soft tissue sarcoma; non-Hodgkin lymphoma (NHL); Hodgkin disease; and chronic lymphocytic leukemia (CLL), including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggestive evidence of an association was found for respiratory cancers (lung, bronchus, trachea, larynx); prostate cancer; multiple myeloma; and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange. The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is "limited because chance, bias, and confounding could not be ruled out with confidence." At the request of the Veterans Administration, the Institute of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers and been detrimental to their health. Its report, Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft, confirmed that it could have. U.S. Public Health Service Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer and of nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that, in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment. Military personnel who were involved in the storage, mixing, and transportation of the chemicals (including aircraft mechanics), and in their actual use, were probably among those who received the heaviest exposures. Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims. 
Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetrachlorodibenzo-p-dioxin. U.S. Veterans of Laos and Cambodia During the Vietnam War, the United States fought the North Vietnamese and their allies in Laos and Cambodia, including through heavy bombing campaigns. They also sprayed large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped in Laos and in Cambodia. Because Laos and Cambodia were both officially neutral during the Vietnam War, the U.S. attempted to keep its military operations in those countries secret from the American population, and it has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. One noteworthy exception, according to the U.S. Department of Labor, is a claim filed with the CIA by an employee of "a self-insured contractor to the CIA that was no longer in business." The CIA advised the Department of Labor that it "had no objections" to paying the claim, and Labor accepted the claim for payment. Ecological impact About 17.8% or of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were, respectively, 145 and 170 species of birds and 30 and 55 species of mammals. Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases. Sociopolitical impact American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact that this would have. The RAND Corporation's Memorandum 5446-ISA/ARPA states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange and areas were bulldozed clear of vegetation, forcing many rural civilians to move to the cities. Legal and diplomatic proceedings International The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. 
Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. Article 2(4) of Protocol III of the Convention on Certain Conventional Weapons contains the "Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception voids any protection of military and civilian personnel from a napalm attack or something like Agent Orange, and it has been argued that it was clearly designed to cover situations like U.S. tactics in Vietnam. Class action lawsuit Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. In 1978, army veteran Paul Reutershan sued Dow Chemical for $10 million, after he was diagnosed with terminal cancer that he believed was a result of Agent Orange exposure. After Reutershan died in December 1978, his attorneys added additional plaintiffs and refiled the lawsuit as a class action. That lawsuit would eventually represent thousands of veterans, and was considered one of the largest and most complex lawsuits ever brought in the U.S. at that time. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. On meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson was so impressed that a physician would show so much interest in a Vietnam veteran that he forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after he was first contacted by the doctor. The corporate defendants sought to escape culpability by blaming everything on the U.S. government. In 1980, Mayerson, with Sgt. Charles E. Hartz as his principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest ever filed up to that time. Hartz's deposition, taken to preserve his testimony for trial, was one of the first of its kind in America and the first for an Agent Orange case, as it was understood that Hartz would not live to see the trial because of a brain tumor that began to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction. The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin. 
The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. Many veterans who were victims of Agent Orange exposure were outraged that the case had been settled instead of going to court and felt they had been betrayed by the lawyers. "Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700. In 2004, Monsanto spokesperson Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects." On 22 August 2024, the Court of Appeal of Paris dismissed an appeal filed by Tran To Nga against 14 US corporations that supplied Agent Orange for the US army during the war in Vietnam. The lawyers said that Nga would take her case to France's highest appeals court. Only military veterans from the United States and its allies in the war have won compensation so far. Some of the agrochemical companies in the U.S. have compensated U.S. veterans, but not Vietnamese victims. New Jersey Agent Orange Commission In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study its effects. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine trace dioxin levels in blood. Prior to this, such levels could only be found in the adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the exposed group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including soldiers, Marines, and Brown Water Navy personnel. U.S. Congress In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" to exposure to Agent Orange/dioxin, making veterans who served in Vietnam eligible to receive treatment and compensation for these conditions. 
The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews and addition of any new diseases to the presumptive list by the VA expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. This list now includes B cell leukemias, such as hairy cell leukemia, Parkinson's disease and ischemic heart disease, these last three having been added on August 31, 2010. Several highly placed individuals in government are voicing concerns about whether some of the diseases on the list should, in fact, actually have been included. In 2011, an appraisal of the 20-year long Air Force Health Study that began in 1982 indicates that the results of the AFHS as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange". The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because as veterans without "boots on the ground" service in Vietnam, they were not covered under VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an Interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses. U.S.–Vietnamese government negotiations In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held. Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin. A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006. 
In the joint statement, President Bush and President Triet agreed "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed into law the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007, a funding measure for the wars in Iraq and Afghanistan that included an earmark of $3 million specifically for programs to remediate dioxin 'hotspots' on former U.S. military bases and for public health programs in the surrounding communities; some authors consider this to be completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up, and that three others are estimated to require $60 million for cleanup. The appropriation was renewed in fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in fiscal year 2010 in the Supplemental Appropriations Act, and a total of $18.5 million was appropriated for fiscal year 2011. Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forged closer ties to boost trade and counter China's rising influence in the disputed South China Sea. Vietnamese victims class action lawsuit in U.S. courts On January 31, 2004, a victims' rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn against several U.S. companies for liability in causing personal injury by developing and producing the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, the 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with the dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District, who had presided over the 1984 U.S. veterans class-action lawsuit, dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded Agent Orange was not considered a poison under international humanitarian law at the time of its use by the U.S.; the U.S. was not prohibited from using it as a herbicide; and the companies which produced the substance were not liable for the method of its use by the government. In the dismissal statement, Weinstein wrote, "The prohibition extended only to gases deployed for their asphyxiating or toxic effects on man, not to herbicides designed to affect plants that may have unintended harmful side-effects on people." 
Author and activist George Jackson had previously written that: If the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to the United Kingdom's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare. The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled the chemical companies, as contractors of the U.S. government, shared the same immunity. The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition to the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals. Help for those affected in Vietnam To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", which each host between 50 and 100 victims, giving them medical and psychological help. As of 2006, there were 11 such villages, thus granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam and individuals who are aware of and sympathetic to the impacts of Agent Orange have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War, working with their former enemy, veterans from the Vietnam Veterans Association, established the Vietnam Friendship Village outside of Hanoi. The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, the Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange. The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need. Vuong Mo of the Vietnam News Agency described one of the centers: May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ... 
The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have physical statures as small as those of 7- or 8-year-olds. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful. On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urges the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy. Use outside of Vietnam Australia In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth has produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state. Canada The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit to pursue class action litigation concerning Agent Orange and Agent Purple with the Federal Court of Canada. On August 4, 2009, the court rejected the case, citing a lack of evidence. In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe. 
A legislative commission in the State of Maine found in 2024 that the Canadian investigation was "incorrect, biased, and based upon, in some cases, incomplete data and poor study design—at times exacerbated by the rapid period in which these reports were required to be conducted and issued." On February 17, 2011, the Toronto Star revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The Toronto Star reported that "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the Toronto Star article, the Ontario provincial government launched a probe into the use of Agent Orange. Guam An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggests that Agent Orange was among the herbicides routinely used on and around Andersen Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. Several Guam veterans have collected evidence to assist in their disability claims for direct exposure to dioxin-containing herbicides such as 2,4,5-T, seeking the same illness associations and disability coverage that have become standard for those who were harmed by the same chemical contaminant of Agent Orange used in Vietnam. South Korea Agent Orange was used in South Korea in the late 1960s. In 1999, about 20,000 South Koreans filed two separate lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Sciences report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims". In 2011, the United States local television station KPHO-TV in Phoenix, Arizona, alleged that in 1978 the United States Army had buried 250 55-gallon drums of Agent Orange at Camp Carroll, the U.S. Army base located in Gyeongsangbuk-do, South Korea. Currently, veterans who provide evidence meeting VA requirements for service in Vietnam and who can medically establish that anytime after this 'presumptive exposure' they developed any medical problems on the list of presumptive diseases may receive compensation from the VA. Certain veterans who served in South Korea and are able to prove they were assigned to certain specified units around the Korean Demilitarized Zone during a specific time frame are afforded a similar presumption. 
New Zealand The use of Agent Orange has been controversial in New Zealand because of the exposure of New Zealand troops in Vietnam and because of the production, at the Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth, of herbicide used in Agent Orange, which has been alleged at various times to have been exported for use in the Vietnam War and to other users. What is established is that from 1962 until 1987, 2,4,5-T herbicide was manufactured at the Ivon Watkins-Dow plant for domestic use in New Zealand. It was widely used by farmers in New Zealand agriculture as a weed killer. This was the basis of a 2005 New Zealand media story that claimed that the herbicide had been exported to U.S. military bases in South East Asia. However, the claim was not proven, a fact which the media did not subsequently report. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted. However, the agriscience company Corteva (which split from DowDuPont in 2019) agreed to clean up the Paritutu site in September 2022. Philippines Herbicide persistence studies of Agents Orange and White were conducted in the Philippines. Johnston Atoll The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it in 1972 aboard the ship MV Transpacific for storage on Johnston Atoll. The EPA reports that of Herbicide Orange was stored at Johnston Island in the Pacific and at Gulfport, Mississippi. Research and studies were initiated to find a safe method to destroy the materials, and it was discovered they could be incinerated safely under special conditions of temperature and dwell time. However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. Among the many methods tested was the possibility of salvaging the herbicides by reprocessing them and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was then tested in 1976 and a pilot plant was constructed at Gulfport. From July to September 1977, during Operation Pacer HO, the entire stock of Agent Orange from both Herbicide Orange storage sites at Gulfport and Johnston Atoll was incinerated in four separate burns in the vicinity of Johnston Island aboard the Dutch-owned waste incineration ship . As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat. Okinawa, Japan There have been dozens of reports in the press about the use and/or storage of military-formulated herbicides on Okinawa, based upon statements by former U.S. service members who had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013. In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll", a 2003 publication produced by the United States Army Chemical Materials Agency, which states, "in 1972, the U.S. 
Air Force also brought about 25,000 200L drums of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island", and it details the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa. Further official confirmation of restricted (dioxin-containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find Agent Orange was held in either Thailand or Okinawa. Thailand Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to contain Agent Orange; workers who uncovered the drums while upgrading the airport near Hua Hin District, 100 km south of Bangkok, fell ill. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits. A declassified Department of Defense report written in 1973 suggests that there was significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides. United States The University of Hawaiʻi has acknowledged extensive testing of Agent Orange on behalf of the United States Department of Defense in Hawaii, along with mixtures of Agent Orange on Hawaii Island in 1966 and on Kaua'i Island in 1967–1968; testing and storage in other U.S. locations have been documented by the United States Department of Veterans Affairs. In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned to various East Coast USAF Reserve squadrons, and then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Centers for Disease Control and Prevention's Agency for Toxic Substances and Disease Registry challenged this with its finding that former spray aircraft were indeed contaminated and the aircrews exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015. In 1978, the EPA suspended spraying of Agent Orange in national forests. 
Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed, according to the Tennessee Valley Authority. Forty-four remote acres were sprayed with Agent Orange along power lines throughout the National Forest. In 1983, New Jersey declared a state of emergency at a Passaic River production site. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured Agent Orange in a factory along the river. The tidal river carried dioxin upstream and down, contaminating a stretch of riverbed in one of New Jersey's most populous areas. A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, Canada, Thailand, Puerto Rico, Korea, and in the Pacific Ocean. The Veterans Administration has also acknowledged that Agent Orange was used domestically by U.S. forces in test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s. Cleanup programs In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees. On 9 August 2012, the United States and Vietnam began a cooperative cleanup of the toxic chemical on part of Da Nang International Airport, marking the first time the U.S. government has been involved in cleaning up Agent Orange in Vietnam. Da Nang was the primary storage site of the chemical. Two other cleanup sites the United States and Vietnam are looking at are Biên Hòa, in the southern province of Đồng Nai, and Phù Cát airport, in the central province of Bình Định, both hotspots for dioxin, according to U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper Nhân Dân, the U.S. government provided $41 million to the project. As of 2017, some of the contaminated soil had been cleaned. The Naval Construction Battalion Center at Gulfport, Mississippi, was the largest storage site in the United States for Agent Orange. It was about in size and was still being cleaned up in 2013. In 2016, the EPA laid out its plan for cleaning up a stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires high temperatures over , the destruction process is energy-intensive. See also Environmental impact of war Orange Crush (song) Rainbow herbicides Scorched earth Teratology Vietnam Syndrome References Citations General and cited references NTP (National Toxicology Program); "Toxicology and Carcinogenesis Studies of 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) in Female Harlan Sprague-Dawley Rats (Gavage Studies)", CASRN 1746–01–6, April 2006. Further reading Books Government/NGO reports "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Ms. 
Tran Thi Hoan", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs. July 15, 2010 "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Dr. Nguyen Thi Ngoc Phuong", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs. July 15, 2010 Agent Orange Policy, American Public Health Association, 2007 "Assessment of the health risk of dioxins", World Health Organization/International Programme on Chemical Safety, 1998 Operation Ranch Hand: Herbicides In Southeast Asia History of Operation Ranch Hand, 1983 "Agent Orange Dioxin Contamination in the Environment and Food Chain at Key Hotspots in Viet Nam" Boivin, TG, et al., 2011 News Fawthrop, Tom; Agent of suffering, Guardian, February 10, 2008 Cox, Paul; "The Legacy of Agent Orange is a Continuing Focus of VVAW", The Veteran, Vietnam Veterans Against the War, Volume 38, Number 2, Fall 2008. Barlett, Donald P. and Steele, James B.; "Monsanto's Harvest of Fear", Vanity Fair May 2008 Quick, Ben "The Boneyard" Orion Magazine, March/April 2008 Cheng, Eva; "Vietnam's Agent Orange victims call for solidarity", Green Left Weekly, September 28, 2005 Children and the Vietnam War 30–40 years after the use of Agent Orange Tokar, Brian; "Monsanto: A Checkered History", Z Magazine, March 1999 Video Agent Orange: The Last Battle. Dir. Stephanie Jobe, Adam Scholl. DVD. 2005 HADES. Dir. Caroline Delerue, screenplay by Mauro Bellanova, 2011 Short film by James Nguyen. Vietnam: The Secret Agent. Dir. Jacki Ochs, 1984 Photojournalism CNN Al Jazeera America External links U.S. Environmental Protection Agency – Dioxin Web site Agent Orange Office of Public Health and Environmental Hazards, U.S. Department of Veteran Affairs Report from the National Birth Defects Registry - Birth Defects in Vietnam Veterans' Children "An Ecological Assessment of Johnston Atoll" 1961 introductions Aftermath of the Vietnam War Articles containing video clips Auxinic herbicides Carcinogens Defoliants Dioxins Environmental controversies Environmental impact of war Imperial Chemical Industries Malayan Emergency Medical controversies Military equipment of the Vietnam War Monsanto Operation Ranch Hand Teratogens Canada and the Vietnam War Environmental racism
Agent Orange
[ "Chemistry", "Environmental_science" ]
13,173
[ "Defoliants", "Toxicology", "Toxins by chemical classification", "Chemical weapons", "Carcinogens", "Teratogens", "Dioxins" ]
2,551
https://en.wikipedia.org/wiki/Astronomical%20year%20numbering
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007). The prefix AD and the suffixes CE, BC or BCE (Common Era, Before Christ or Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year n BC/BCE is numbered "−(n − 1)" (a negative number equal to 1 − n). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general n AD/CE is simply n or +n. For normal calculation a number zero is often needed, here most notably when calculating the number of years in a period that spans the epoch; the end years need only be subtracted from each other. The system is so named due to its use in astronomy. Few other disciplines outside history deal with the time before year 1, some exceptions being dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred. Usage of the year zero In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled Christi (Christ's) between years labeled Ante Christum (Before Christ) and Post Christum (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year he labeled at the end of years labeled ante Christum (BC), and immediately before years labeled post Christum (AD) on the mean motion pages in his Tabulæ Astronomicæ, thus adding the designation 0 to Kepler's Christi. Finally, in 1740 the French astronomer Jacques Cassini , who is traditionally credited with the invention of year zero, completed the transition in his Tables astronomiques, simply labeling this year 0, which he placed at the end of Julian years labeled avant Jesus-Christ (before Jesus Christ or BC), and immediately before Julian years labeled après Jesus-Christ (after Jesus Christ or AD). Cassini gave the following reasons for using a year 0: Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives the following explanation: Signed years without the year zero Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantine historian Venance Grumel (1890–1967) used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He may have done so to save space and he put no year 0 between them. Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime. 
Although these are defined in terms of ISO 8601 which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the problems arising from the lack of backward compatibility. See also Julian day, another calendar commonly used by astronomers Astronomical chronology Holocene calendar ISO 8601 References Calendar eras Chronology Specific calendars Year numbering
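The conversion rule stated above lends itself to a short worked example. The following sketch, in Python purely for illustration (the function names are invented here and are not taken from any standard library), maps historical BC/AD years to astronomical year numbers and back, and shows how a span that straddles the epoch reduces to a plain subtraction:

def historical_to_astronomical(year, era):
    # Historical years have no year 0: ..., 2 BC, 1 BC, AD 1, AD 2, ...
    if era == "AD":
        return year               # AD n -> +n
    if era == "BC":
        return -(year - 1)        # n BC -> -(n - 1); 1 BC -> 0
    raise ValueError("era must be 'AD' or 'BC'")

def astronomical_to_historical(astro_year):
    # Inverse mapping: 0 -> 1 BC, -1 -> 2 BC, +1 -> AD 1.
    if astro_year >= 1:
        return astro_year, "AD"
    return 1 - astro_year, "BC"

# Because astronomical numbering includes a year 0, the number of years
# between two dates on either side of the epoch is a simple difference:
span = historical_to_astronomical(10, "AD") - historical_to_astronomical(10, "BC")
assert span == 19   # from 10 BC (astronomical -9) to AD 10 (astronomical +10)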
Astronomical year numbering
[ "Physics", "Astronomy" ]
860
[ "Time in astronomy", "Chronology", "Physical quantities", "Time", "Spacetime" ]
2,553
https://en.wikipedia.org/wiki/Ab%20urbe%20condita
Ab urbe condita ('from the founding of the City'), or anno urbis conditae ('in the year since the city's founding'), abbreviated as AUC or AVC, expresses a date in years since 753 BC, the traditional founding of Rome. It is an expression used in antiquity and by classical historians to refer to a given year in Ancient Rome. In reference to the traditional year of the foundation of Rome, the year 1 BC would be written AUC 753, whereas AD 1 would be AUC 754. The foundation of the Roman Empire in 27 BC would be AUC 727. More generally, an AD year is converted to AUC by adding 753. Usage of the term was more common during the Renaissance, when editors sometimes added AUC to Roman manuscripts they published, giving the false impression that the convention was commonly used in antiquity. In reality, the dominant method of identifying years in Roman times was to name the two consuls who held office that year. In late antiquity, regnal years were also in use, as in Roman Egypt during the Diocletian era after AD 293, and in the Byzantine Empire from AD 537, following a decree by Justinian. Use Prior to the Roman state's adoption of the Varronian chronology – created by Titus Pomponius Atticus and Marcus Terentius Varro – there were many different dates posited for when the city was founded. This state of confusion meant that, in order to use an AUC date, one had to pick a single founding date as canonical. The Varronian chronology, constructed from fragmentary sources and demonstrably about four years off of absolute events, placed the founding of the city on 21 April 753 BC. This date, likely arrived at by mechanical calculation but accepted with a variance of one year in the Augustan era, has become the traditional date. From the time of Claudius onward, this calculation superseded other contemporary calculations. Celebrating the anniversary of the city became part of imperial propaganda. Claudius was the first to hold magnificent celebrations in honor of the anniversary of the city, in AD 47, the eight hundredth year from the founding of the city. Hadrian, in AD 121, and Antoninus Pius, in AD 147 and AD 148, held similar celebrations. In AD 248, Philip the Arab celebrated Rome's first millennium, together with Ludi saeculares for Rome's alleged tenth saeculum. Coins from his reign commemorate the celebrations. A coin by a contender for the imperial throne, Pacatianus, explicitly states "[y]ear one thousand and first", which is an indication that the citizens of the empire had a sense of the beginning of a new era, a Sæculum Novum. Calendar era The Anno Domini (AD) year numbering was developed by a monk named Dionysius Exiguus in Rome as a result of his work on calculating the date of Easter. Dionysius did not use the AUC convention, but instead based his calculations on the Diocletian era. This convention had been in use since AD 293, the year of the tetrarchy, as it became impractical to use regnal years of the current emperor. In his Easter table, the year was equated with the 248th regnal year of Diocletian. The table counted the years starting from the presumed birth of Christ, rather than the accession of the emperor Diocletian on 20 November AD 284 or, as stated by Dionysius: "sed magis elegimus ab incarnatione Domini nostri Jesu Christi annorum tempora praenotare" ("but rather we choose to name the times of the years from the incarnation of our Lord Jesus Christ"). Blackburn and Holford-Strevens review interpretations of Dionysius which place the Incarnation in 2 BC, 1 BC, or AD 1. 
The year AD 1 corresponds to AUC 754, based on the epoch of Varro. Thus AUC 1 corresponds to 753 BC, an AD year n corresponds to AUC n + 753, and a BC year n corresponds to AUC 754 − n. See also Calendar era History of Italy List of Latin phrases Roman calendar References External links 1st-century BC establishments in the Roman Empire 8th century BC in the Roman Kingdom Calendar eras Chronology Latin words and phrases Roman calendar Diocletian
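The correspondence given above can also be expressed as a short calculation. As an illustrative sketch only (Python is assumed here, and the function names are invented for this example), conversion between BC/AD years and AUC years under the Varronian epoch is:

def to_auc(year, era):
    # Varronian epoch: AUC 1 = 753 BC, so AUC 753 = 1 BC and AUC 754 = AD 1.
    if era == "AD":
        return year + 753
    if era == "BC":
        return 754 - year
    raise ValueError("era must be 'AD' or 'BC'")

def from_auc(auc_year):
    # AUC 754 onward falls in the AD era; AUC 753 and earlier are BC.
    if auc_year >= 754:
        return auc_year - 753, "AD"
    return 754 - auc_year, "BC"

# Checks against figures given in the article:
assert to_auc(1, "BC") == 753
assert to_auc(1, "AD") == 754
assert to_auc(27, "BC") == 727     # foundation of the Roman Empire
assert to_auc(248, "AD") == 1001   # Pacatianus's "year one thousand and first"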
Ab urbe condita
[ "Physics" ]
872
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
2,594
https://en.wikipedia.org/wiki/Ant
Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from vespoid wasp ancestors in the Cretaceous period. More than 13,800 of an estimated total of 22,000 species have been classified. They are easily identified by their geniculate (elbowed) antennae and the distinctive node-like structure that forms their slender waists. Ants form colonies that range in size from a few dozen individuals often living in small natural cavities to highly organised colonies that may occupy large territories with sizeable nest that consist of millions of individuals or into the hundreds of millions in super colonies. Typical colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony. Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in moist tropical ecosystems and may exceed the combined biomass of wild birds and mammals. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships. Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rites. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant (Solenopsis invicta) of South America, are regarded as invasive species in other parts of the world, establishing themselves in areas where they have been introduced accidentally. Etymology The word ant and the archaic word emmet are derived from , of Middle English, which come from of Old English; these are all related to Low Saxon , and varieties (Old Saxon ) and to German (Old High German ). All of these words come from West Germanic *, and the original meaning of the word was "the biter" (from Proto-Germanic , "off, away" + "cut"). The family name Formicidae is derived from the Latin ("ant") from which the words in other Romance languages, such as the Portuguese , Italian , Spanish , Romanian , and French are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was the root for Sanskrit vamrah, Greek μύρμηξ mýrmēx, Old Church Slavonic mraviji, Old Irish moirb, Old Norse maurr, Dutch mier, Swedish myra, Danish myre, Middle Dutch miere, and Crimean Gothic miera. Taxonomy and evolution The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. Ants evolved from a lineage within the stinging wasps, and a 2013 study suggests that they are a sister group of the Apoidea. However, since Apoidea is a superfamily, ants must be upgraded to the same rank. 
A more detailed basic taxonomy was proposed in 2020. Three species of the extinct mid-Cretaceous genera Camelomecia and Camelosphecia were placed outside of the Formicidae, in a separate clade within the general superfamily Formicoidea, which, together with Apoidea, forms the higher-ranking group Formicapoidina. Fernández et al. (2021) suggest that the common ancestors of ants and apoids within the Formicapoidina probably existed as early as in the end of the Jurassic period, before divergence in the Cretaceous. In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant (Sphecomyrma) that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. The oldest fossils of ants date to the mid-Cretaceous, around 100 million years ago, which belong to extinct stem-groups such as the Haidomyrmecinae, Sphecomyrminae and Zigrasimeciinae, with modern ant subfamilies appearing towards the end of the Cretaceous around 80–70 million years ago. Ants diversified extensively during the Angiosperm Terrestrial Revolution and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil. During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). Their representation in the fossil record is poor, in comparison to the populations of other insects, representing only about 1% of fossil evidence of insects in the era. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the species that lived in the Eocene epoch, around one in 10 genera survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene). Termites live in colonies and are sometimes called "white ants", but termites are only distantly related to ants. They are the sub-order Isoptera, and together with cockroaches, they form the order Blattodea. Blattodeans are related to mantids, crickets, and other winged insects that do not undergo complete metamorphosis. Like ants, termites are eusocial, with sterile workers, but they differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps. Distribution and diversity Ants have a cosmopolitan distribution. They are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. There is considerable variation in ant abundance across habitats, peaking in the moist tropics to nearly six times that found in less suitable habitats. Their ecological dominance has been examined primarily using estimates of their biomass: myrmecologist E. O. 
Wilson had estimated in 2009 that at any one time the total number of ants was between one and ten quadrillion (short scale) (i.e., between 10^15 and 10^16) and using this estimate he had suggested that the total biomass of all the ants in the world was approximately equal to the total biomass of the entire human race. More careful estimates made in 2022, which take into account regional variations, put the global ant contribution at 12 megatons of dry carbon, which is about 20% of the total human contribution, but greater than that of the wild birds and mammals combined. This study also puts a conservative estimate of the ants at about 20 × 10^15 (20 quadrillion). Ants range in size from , the largest species being the fossil Titanomyrma giganteum, the queen of which was long with a wingspan of . Ants vary in colour; most ants are yellow to red or brown to black, but a few species are green and some tropical species have a metallic lustre. More than 13,800 species are currently known (with upper estimates of the potential existence of about 22,000; see the article List of ant genera), with the greatest diversity in the tropics. Taxonomic studies continue to resolve the classification and systematics of ants. Online databases of ant species, including AntWeb and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies. Morphology Ants are distinct in their morphology from other insects in having geniculate (elbowed) antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments (formally tagmata). The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments). Tergosternal fusion, when the tergite and sternite of a segment fuse together, can occur partly or fully on the second, third and fourth abdominal segments and is used in identification. Fourth abdominal tergosternal fusion was formerly used as a character that defined the poneromorph subfamilies, Ponerinae and relatives within their clade, but this is no longer considered a synapomorphic character. Like other arthropods, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages. Head An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together. 
Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, ants tend to have blurrier eyesight, particularly in smaller species, and a few subterranean taxa are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a meter away. Based on experiments conducted to test their ability to differentiate between selected wavelengths of light, some ant species such as Camponotus blandus, Solenopsis invicta, and Formica cunicularia are thought to possess a degree of colour vision. Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae. Mesosoma Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants (queens and males) have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur. Metasoma The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests. Polymorphism In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor (micrergates), median, and major ergates (macrergates). Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in Carebara diversa show nearly a 500-fold difference in their dry weights. Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called repletes. For instance, these replete workers develop in the North American honeypot ant Myrmecocystus mexicanus. Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. 
This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in Acromyrmex sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of Solenopsis invicta can decide whether the colony will have single or multiple queens. The Australian jack jumper ant (Myrmecia pilosula) has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects. Genome size Genome size is a fundamental characteristic of an organism. Ants have been found to have tiny genomes, with the evolution of genome size suggested to occur through loss and accumulation of non-coding regions, mainly transposable elements, and occasionally by whole genome duplication. This may be related to colonisation processes, but further studies are needed to verify this. Life cycle The life of an ant starts from an egg; if the egg is fertilised, the progeny will be female diploid, if not, it will be male haploid. Ants develop by complete metamorphosis with the larva stages passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers. The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. Winged male ants, called drones (termed "aner" in old literature), emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often are moved around among the various brood chambers within the colony. A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. Such age-based task-specialization or polyethism has been suggested as having evolved due to the high casualties involved in foraging and defence, making it an acceptable risk only for ants who are older and likely to die sooner from natural causes. In the Brazilian ant Forelius pusillus, the nest entrance is closed from the outside to protect the colony from predatory ant species at sunset each day. About one to eight workers seal the nest entrance from the outside and they have no chance of returning to the nest and are in effect sacrificed. 
Whether these seemingly suicidal workers are older workers has not been determined. Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size. Ants are active all year long in the tropics; however, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity. Reproduction A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right. Drones can also mate with existing queens by entering a foreign colony, such as in army ants. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed. Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, winged females and winged males, known to entomologists as alates, leave the colony in what is called a nuptial flight. The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae. The genus Cardiocondyla have species with both winged and wingless males, where the latter will only mate with females living in the same nest. Some species in the genus have lost winged males completely, and only produce wingless males. In C. elegans, workers may transport newly emerged queens to other conspecific nests where the wingless males from unrelated colonies can mate with them, a behavioural adaptation that may reduce the chances of inbreeding. Mated females then seek a suitable place to begin a colony. There, they break off their wings using their tibial spurs and begin to lay and care for eggs. 
The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilized haploid eggs to produce drones. The first workers to hatch, known as nanitics, are weaker and smaller than later workers but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees. Nests, colonies, and supercolonies The typical ant species has a colony occupying a single nest, housing one or more queens, where the brood is raised. There are however more than 150 species of ants in 49 genera that are known to have colonies consisting of multiple spatially separated nests. These polydomous (as opposed to monodomous) colonies have food and workers moving between the nests. Membership in a colony is identified by the response of worker ants, which determine whether another individual belongs to their own colony or not. A signature cocktail of body surface chemicals (also known as cuticular hydrocarbons or CHCs) forms the so-called colony odor which other members can recognize. Some ant species appear to be less discriminating, and in the Argentine ant Linepithema humile, workers carried from a colony anywhere in the southern US and Mexico are acceptable within other colonies in the same region. Similarly, workers from colonies that have become established in Europe are accepted by any other colonies within Europe but not by the colonies in the Americas. The interpretation of these observations has been debated; some have termed these large populations supercolonies, while others have described them as unicolonial. Behaviour and ecology Communication Ants communicate with each other using pheromones, sounds, and touch. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, which then reinforce it when they head back to the colony with food. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path. Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to. 
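The trail-laying, reinforcement, and evaporation dynamic described above is also the inspiration for ant colony optimization algorithms in computer science (see the section "In science and technology" below). As a minimal illustrative sketch (Python is assumed, the parameter values are invented for this example, and it is not a description of any specific published algorithm), a two-route version of the behaviour can be simulated as:

import random

def simulate_two_routes(trips=2000, evaporation=0.01, deposit=1.0,
                        short_len=1.0, long_len=2.0):
    # One pheromone level per route. Each simulated forager picks a route in
    # proportion to its pheromone, deposits an amount inversely proportional
    # to the route's length (shorter round trips reinforce sooner), and then
    # a little pheromone evaporates from both routes.
    pheromone = {"short": 1.0, "long": 1.0}
    for _ in range(trips):
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        length = short_len if route == "short" else long_len
        pheromone[route] += deposit / length
        for r in pheromone:
            pheromone[r] *= (1.0 - evaporation)
    return pheromone

# After many trips the shorter route typically holds most of the pheromone,
# so the simulated foragers converge on the better path, mirroring the
# collective route-finding described above.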
In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony. Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species. Defence Ants attack and defend themselves by biting and, in many species, by stinging often injecting or spraying chemicals. Bullet ants (Paraponera), located in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt sting pain index. The sting of jack jumper ants can be lethal for humans, and an antivenom has been developed for it. Fire ants, Solenopsis spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people. Formicine ants secrete a poison from their glands, made mainly of formic acid. Trap-jaw ants of the genus Odontomachus are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of Odontomachus bauri recorded peak speeds of between , with the jaws closing within 130 microseconds on average. The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in other ponerines such as Anochetus, as well as some genera in the tribe Attini, such as Daceton, Orectognathus, and Strumigenys, which are viewed as examples of convergent evolution. A Malaysian species of ant in the Camponotus cylindricus group has enlarged mandibular glands that extend into their gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies. In addition to defence against predators, ants need to protect their colonies from pathogens. Secretions from the metapleural gland, unique to the ants, produce a complex range of chemicals including several with antibiotic properties. Some worker ants maintain the hygiene of the colony and their activities include undertaking or necrophoresis, the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in Atta mexicana while workers of Linepithema humile react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour. In Megaponera analis, injured ants are treated by nestmastes with secretions from their metapleural glands which protect them from infection. Camponotus ants do not have a metapleural gland and Camponotus maculatus as well as C. floridanus workers have been found to amputate the affected legs of nestmates when the femur is injured. 
A femur injury carries a greater risk of infection unlike a tibia injury. Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. Workers of Cataulacus muticus, an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. Camponotus anderseni, which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration. Learning Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of Temnothorax albipennis can lead a naïve nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close. Controlled experiments with colonies of Cerapachys biroi suggest that an individual may choose nest roles based on her previous experience. An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care. Nest construction Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; Temnothorax albipennis will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats. The army ants of South America, such as the Eciton burchellii species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together. Weaver ant (Oecophylla spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of Polyrhachis. Formica polyctena, among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials. Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures. 
Cultivation of food Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called gongylidia. Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi. Navigation Foraging ants travel distances of up to from their nest and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus Cataglyphis such as the Sahara desert ant navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun. They integrate this information to find the shortest route back to their nest. Like all ants, they can also make use of visual landmarks when available as well as olfactory and tactile cues to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction. These polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill. The workers may then run around continuously until they die of exhaustion. Locomotion The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant (Harpegnathos saltator) is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including Cephalotes atratus; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy. Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. 
These rafts may also have a role in allowing ants to colonise islands. Polyrhachis sokolova, a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe. Cooperation and competition Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey. Some species attack and take over neighbouring ant colonies. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Captured workers of enslaved Temnothorax species have evolved a counter-strategy, destroying just the female pupae of the slave-making Temnothorax americanus, but sparing the males (who do not take part in slave-raiding as adults). Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked. Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as Strumigenys xenos are entirely parasitic and do not have workers, but instead, rely on the food gathered by their Strumigenys perplexa hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest. A conflict between the sexes of a species is seen in some species of ants with these reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in Wasmannia auropunctata, where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father. Relationships with other organisms Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines. Aphids and other hemipteran insects secrete a sweet liquid called honeydew, when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. 
In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect mealybugs from their natural enemies. Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. The chemicals in the secretions of Narathura japonica alter the behavior of attendant Pristomyrmex punctatus workers, making them less aggressive and stationary. The relationship, formerly characterized as "mutualistic", is now considered as possibly a case of the ants being parasitically manipulated by the caterpillars. Some caterpillars produce vibrations and sounds that are perceived by the ants. A similar adaptation can be seen in Grizzled skipper butterflies that emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest where it feeds on the ant larvae. A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Hyphomicrobiales whose members are known for being nitrogen-fixing symbionts in legumes but the species found in ant lack the ability to fix nitrogen. Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera Leucoagaricus or Leucocoprinus of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant Allomerus decemarticulatus has evolved a three-way association with the host plant, Hirtella physophora (Chrysobalanaceae), and a sticky fungus which is used to trap their insect prey. Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees, (Duroia hirsuta). This modification of the forest provides the ants with more nesting sites inside the stems of the Duroia trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare, one example being of the pollination of the orchid Leporella fimbriata which induces male Myrmecia urens to pseudocopulate with the flowers, transferring pollen in the process. One theory that has been proposed for the rarity of pollination is that the secretions of the metapleural gland inactivate and reduce the viability of pollen. Some plants, mostly angiosperms but also some ferns, have special nectar exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia (Acacia cornigera) in Central America have hollow thorns that house colonies of stinging ants (Pseudomyrmex ferruginea) who defend the tree against insects, browsing mammals, and epiphytic vines. 
Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji Philidris nagasau (Dolichoderinae) are known to selectively grow species of epiphytic Squamellaria (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds and the domatia of young seedling are immediately occupied and the ant faeces in them contribute to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the Macaranga tree, which has stems adapted to house colonies of Crematogaster ants. Many plant species have seeds that are adapted for dispersal by ants. Seed dispersal by ants or myrmecochory is widespread, and new estimates suggest that nearly 9% of all plant species may have such ant associations. Often, seed-dispersing ants perform directed dispersal, depositing the seeds in locations that increase the likelihood of seed survival to reproduction. Some plants in arid, fire-prone systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food. Ants can substantially alter rate of decomposition and nutrient cycling in their nest. By myrmecochory and modification of soil conditions they substantially alter vegetation and nutrient cycling in surrounding ecosystem. A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch. Most ants are predatory and some prey on and obtain food from other social insects including other ants. Some species specialise in preying on termites (Megaponera and Termitopone) while a few Cerapachyinae prey on other ants. Some termites, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The tropical wasp Mischocyttarus drewseni coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as A. multipicta, defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees (Trigona and Melipona) use chemical defences against ants. Flies in the Old World genus Bengalia (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly (Vestigipoda myrmolarvoidea) live in the nests of ants of the genus Aenictus and are cared for by the ants. Fungi in the genera Cordyceps and Ophiocordyceps infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates. A nematode (Myrmeconema neotropicum) that infects canopy ants (Cephalotes atratus) causes the black-coloured gasters of workers to turn red. 
The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as Hyeronima alchorneoides, and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode. A study of Temnothorax nylanderi colonies in Germany found that workers parasitized by the tapeworm Anomotaenia brevis (ants are intermediate hosts, the definitive hosts are woodpeckers) lived much longer than unparasitized workers and had a reduced mortality rate, comparable to that of the queens of the same species, which live for as long as two decades. South American poison dart frogs in the genus Dendrobates feed mainly on ants, and the toxins in their skin may come from the ants. Army ants forage in a wide roving column, attacking any animals in that path that are unable to escape. In Central and South America, Eciton burchellii is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves which pick seeds at nest entrances as they are being transported by species of Pogonomyrmex. Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds. Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears (Ursus arctos) have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer and autumn, respectively, is composed of ants. Relationship with humans Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings or cause economic losses. In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant Atta cephalotes are also used by native surgeons in closing wounds. Some ants have toxic venom and are of medical importance. The species include Paraponera clavata (tocandira) and Dinoponera spp. (false tocandiras) of South America and the Myrmecia ants of Australia. In South Africa, ants are used to help harvest the seeds of rooibos (Aspalathus linearis), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them en masse. Up to half a pound (200 g) of seeds may be collected from one ant-heap. 
Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant (Aneuretus simoni) and Adetomyrma venatrix of Madagascar. As food Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican escamoles. They are considered a form of insect caviar and can sell for as much as US$50 per kg going up to US$200 per kg (as of 2006) because they are seasonal and hard to find. In the Colombian department of Santander, hormigas culonas (roughly interpreted as "large-bottomed ants") Atta laevigata are toasted alive and eaten. In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant (Oecophylla smaragdina) is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, yam (), in a dish called yam khai mot daeng () or red ant egg salad, a dish that comes from the Issan or north-eastern region of Thailand. Saville-Kent, in the Naturalist in Australia wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates". Ants or their pupae are used as starters for yogurt making in parts of Bulgaria and Turkey. In his First Summer in the Sierra, John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. The Mexican Indians eat the repletes, or living honey-pots, of the honey ant (Myrmecocystus). As pests Some ant species are considered as pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, immigrant pavement ant, yellow crazy ant, banded sugar ant, pharaoh ant, red wood ant, black carpenter ant, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, some will seek water sources, others may damage indoor structures, some may damage agricultural crops directly or by aiding sucking pests. Some will sting or bite. The adaptive nature of ant colonies make it nearly impossible to eliminate entire colonies and most pest management practices aim to control local populations and tend to be temporary solutions. Ant populations are managed by a combination of approaches that make use of chemical, biological, and physical methods. Chemical methods include the use of insecticidal bait which is gathered by ants as food and brought back to the nest where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species and techniques may vary according to the location and circumstance. In science and technology Observed by humans since the dawn of history, the behaviour of ants has been documented and the subject of early writings and fables passed from one century to another. 
Those using scientific methods, myrmecologists, study ants in the laboratory and in their natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in formicaria, specially constructed glass framed enclosures. Individuals may be tracked for study by marking them with dots of colours. The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example Ant colony optimization and Ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms. As pets From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light. In culture Anthropomorphised ants have often been used in fables, children's stories, and religious texts to represent industriousness and cooperative effort, such as in the Aesop fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army., In parts of Africa, ants are considered to be the messengers of the deities. Some Native American mythology, such as the Hopi mythology, considers ants as the first animals. Ant bites are often said to have curative properties. The sting of some species of Pseudomyrmex is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance. In Greek mythology, the goddess Athena turned the maiden Myrmex into an ant when the latter claimed to have invented the plough, when in fact it was Athena's own invention. Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book A Tramp Abroad. Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel The Once and Future King. The plot in French entomologist and writer Bernard Werber's Les Fourmis science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour are described using contemporary scientific knowledge. H. G. Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, The Empire of the Ants. A similar German story involving army ants, Leiningen Versus the Ants, was written in 1937 and recreated in movie form as The Naked Jungle in 1954. In more recent times, animated cartoons and 3-D animated films featuring ants have been produced including Antz, A Bug's Life, The Ant Bully, The Ant and the Aardvark, Ferdy the Ant and Atom Ant. Renowned myrmecologist E. O. 
Wilson wrote a short story, "Trailhead" in 2010 for The New Yorker magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ants' point of view. Ants also are quite popular inspiration for many science-fiction insectoids, such as the Formics of Ender's Game, the Bugs of Starship Troopers, the giant ants in the films Them! and Empire of the Ants, Marvel Comics' super hero Ant-Man, and ants mutated into super-intelligence in Phase IV. In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the Master of Orion series of games or the ChCht in Deadlock II. These characters are often credited with a hive mind, a common misconception about ant colonies. In the early 1990s, the video game SimAnt, which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program". See also Glossary of ant terms International Union for the Study of Social Insects Myrmecological News (journal) Task allocation and partitioning in social insects References Cited texts Further reading External links AntWeb from The California Academy of Sciences AntWiki – Bringing Ants to the World Ant Species Fact Sheets from the National Pest Management Association on Argentine, Carpenter, Pharaoh, Odorous, and other ant species Ant Genera of the World – distribution maps The super-nettles. A dermatologist's guide to ants-in-the-plants Symbiosis Extant Albian first appearances Articles containing video clips Insects in culture
Ant
[ "Biology" ]
12,256
[ "Biological interactions", "Behavior", "Symbiosis" ]
2,604
https://en.wikipedia.org/wiki/Abated
See also: Abatement. Abated is an ancient technical term applied in masonry and metal work to those portions which are sunk beneath the surface, as in inscriptions where the ground is sunk round the letters so as to leave the letters or ornament in relief. References Construction Masonry
Abated
[ "Engineering" ]
59
[ "Construction", "Masonry" ]
2,606
https://en.wikipedia.org/wiki/Abatis
An abatis, abattis, or abbattis is a field fortification consisting of an obstacle formed (in the modern era) of the branches of trees laid in a row, with the sharpened tops directed outwards, towards the enemy. The trees are usually interlaced or tied with wire. Abatis are used alone or in combination with wire entanglements and other obstacles. History Gregory of Tours mentions the use of abatises several times in his writing about the history of the early Franks. He wrote that the Franks ambushed and destroyed a Roman army near Neuss during the reign of Magnus Maximus with the use of an abatis. He also wrote that Mummolus, a general working for Burgundy, successfully used an abatis to defeat a Lombard army near Embrun. A classic use of an abatis was at the Battle of Carillon (1758) during the Seven Years' War. The 3,600 French troops defeated a massive army of 16,000 British and Colonial troops by fronting their defensive positions with an extremely dense abatis. The British found the defences almost impossible to breach and were forced to withdraw with some 2,600 casualties. Other uses of an abatis can be found at the Battle of the Chateauguay, 26 October 1813, when approximately 1,300 Canadian Voltigeurs, under the command of Charles-Michel de Salaberry, defeated an American corps of approximately 4,000 men, or at the Battle of Plattsburgh. Construction An important weakness of abatis, in contrast to barbed wire, is that it can be destroyed by fire. Also, if laced together with rope instead of wire, the rope can be very quickly destroyed by such fires, after which the abatis can be quickly pulled apart by grappling hooks thrown from a safe distance. An important advantage is that an improvised abatis can be quickly formed in forested areas. This can be done by simply cutting down a row of trees so that they fall with their tops toward the enemy. An alternative is to place explosives so as to blow the trees down. Modern use Abatis are rarely seen nowadays, having been largely replaced by wire obstacles. However, it may be used as a replacement or supplement when barbed wire is in short supply. A form of giant abatis, using whole trees instead of branches, can be used as an improvised anti-tank obstacle. Notes References External links Pamplin Historical Park & The National Museum of the Civil War Soldier includes large and authentic reproduction of abatis used in the U.S. Civil War. Video on modern anti-tank abatis by the North Atlantic Treaty Organization Fortifications by type Engineering barrages Medieval defences
Abatis
[ "Engineering" ]
540
[ "Military engineering", "Engineering barrages" ]
2,621
https://en.wikipedia.org/wiki/Aedicula
In ancient Roman religion, an aedicula (: aediculae) is a small shrine, and in classical architecture refers to a niche covered by a pediment or entablature supported by a pair of columns and typically framing a statue, the early Christian ones sometimes contained funeral urns. Aediculae are also represented in art as a form of ornamentation. The word aedicula is the diminutive of the Latin aedes, a temple building or dwelling place. The Latin word has been anglicised as "aedicule" and as "edicule". Describing post-antique architecture, especially Renaissance architecture, aedicular forms may be described using the word tabernacle, as in tabernacle window. Classical aediculae Many aediculae were household shrines (lararia) that held small altars or statues of the Lares and Di Penates. The Lares were Roman deities protecting the house and the family household gods. The Penates were originally patron gods (really genii) of the storeroom, later becoming household gods guarding the entire house. Other aediculae were small shrines within larger temples, usually set on a base, surmounted by a pediment and surrounded by columns. In ancient Roman architecture the aedicula has this representative function in the society. They are installed in public buildings like the triumphal arch, city gate, and thermae. The Library of Celsus in Ephesus ( AD) is a good example. From the 4th century Christianization of the Roman Empire onwards such shrines, or the framework enclosing them, are often called by the Biblical term tabernacle, which becomes extended to any elaborated framework for a niche, window or picture. Gothic aediculae In Gothic architecture, too, an aedicula or tabernacle is a structural framing device that gives importance to its contents, whether an inscribed plaque, a cult object, a bust or the like, by assuming the tectonic vocabulary of a little building that sets it apart from the wall against which it is placed. A tabernacle frame on a wall serves similar hieratic functions as a free-standing, three-dimensional architectural baldaquin or a ciborium over an altar. In Late Gothic settings, altarpieces and devotional images were customarily crowned with gables and canopies supported by clustered-column piers, echoing in small the architecture of Gothic churches. Painted aediculae frame figures from sacred history in initial letters of illuminated manuscripts. Renaissance aediculae Classicizing architectonic structure and décor all'antica, in the "ancient [Roman] mode", became a fashionable way to frame a painted or bas-relief portrait, or protect an expensive and precious mirror during the High Renaissance; Italian precedents were imitated in France, then in Spain, England and Germany during the later 16th century. Post-Renaissance classicism Aedicular door surrounds that are architecturally treated, with pilasters or columns flanking the doorway and an entablature even with a pediment over it came into use with the 16th century. In the neo-Palladian revival in Britain, architectonic aedicular or tabernacle frames, carved and gilded, are favourite schemes for English Palladian mirror frames of the late 1720s through the 1740s, by such designers as William Kent. Aediculae feature prominently in the arrangement of the Saint Peter's tomb with statues by Bernini; a small aedicula directly underneath it, dated ca. 160 AD, was discovered in 1940. Other aediculae Similar small shrines, called naiskoi, are found in Greek religion, but their use was strictly religious. 
Aediculae exist today in Roman cemeteries as a part of funeral architecture. Presently the most famous aedicula is situated inside the Church of the Holy Sepulchre in the city of Jerusalem. Contemporary American architect Charles Moore (1925–1993) used the concept of aediculae in his work to create spaces within spaces and to evoke the spiritual significance of the home. See also Portico Similar, but free-standing structures: Ciborium Baldachin Monopteros Gazebo Notes References Bibliography Adkins, Lesley & Adkins, Roy A. (1996). Dictionary of Roman Religion. Facts on File, Inc. External links Conservation glossary Ancient Roman temples Architectural elements Ancient Roman architectural elements
Aedicula
[ "Technology", "Engineering" ]
908
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
2,628
https://en.wikipedia.org/wiki/Aegis
The aegis ( ; aigís), as stated in the Iliad, is a device carried by Athena and Zeus, variously interpreted as an animal skin or a shield and sometimes featuring the head of a Gorgon. There may be a connection with a deity named Aex, a daughter of Helios and a nurse of Zeus or alternatively a mistress of Zeus (Hyginus, Astronomica 2. 13). The modern concept of doing something "under someone's aegis means doing something under the protection of a powerful, knowledgeable, or benevolent source. The word aegis is identified with protection by a strong force with its roots in Greek mythology and adopted by the Romans; there are parallels in Norse mythology and in Egyptian mythology as well, where the Greek word aegis is applied by extension. Etymology The Greek aigis has many meanings, including: "violent windstorm", from the verb aïssō (word stem aïg-) = "I rush or move violently". Akin to kataigis, "thunderstorm". The shield of a deity as described above. "goatskin coat", from treating the word as meaning "something grammatically feminine pertaining to goat": Greek aix (stem aig-) = "goat" + suffix -is (stem -id-). The original meaning may have been the first, and Zeus Aigiokhos = "Zeus who holds the aegis" may have originally meant "Sky/Heaven, who holds the thunderstorm". The transition to the meaning "shield" or "goatskin" may have come by folk etymology among a people familiar with draping an animal skin over the left arm as a shield. In Greek mythology The aegis of Athena is referred to in several places in the Iliad. "It produced a sound as from myriad roaring dragons (Iliad, 4.17) and was borne by Athena in battle ... and among them went bright-eyed Athene, holding the precious aegis which is ageless and immortal: a hundred tassels of pure gold hang fluttering from it, tight-woven each of them, and each the worth of a hundred oxen." Virgil imagines the Cyclopes in Hephaestus' forge, who "busily burnished the aegis Athena wears in her angry moods—a fearsome thing with a surface of gold like scaly snake-skin, and the linked serpents and the Gorgon herself upon the goddess's breast—a severed head rolling its eyes", furnished with golden tassels and bearing the Gorgoneion (Medusa's head) in the central boss. Some of the Attic vase-painters retained an archaic tradition that the tassels had originally been serpents in their representations of the aegis. When the Olympian deities overtook the older deities of Greece and she was born of Metis (inside Zeus who had swallowed the goddess) and "re-born" through the head of Zeus fully clothed, Athena already wore her typical garments. When the Olympian shakes the aegis, Mount Ida is wrapped in clouds, the thunder rolls and men are struck down with fear. "Aegis-bearing Zeus", as he is in the Iliad, sometimes lends the fearsome aegis to Athena. In the Iliad when Zeus sends Apollo to revive the wounded Hector, Apollo, holding the aegis, charges the Achaeans, pushing them back to their ships drawn up on the shore. According to Edith Hamilton's Mythology: Timeless Tales of Gods and Heroes, the Aegis is the breastplate of Zeus, and was "awful to behold". However, Zeus is normally portrayed in classical sculpture holding a thunderbolt or lightning, bearing neither a shield nor a breastplate. In some versions, Zeus watched Athena and Triton's daughter, Pallas, compete in a friendly mock battle involving spears. Not wanting his daughter to lose, Zeus flapped his aegis to distract Pallas, whom Athena accidentally impaled. 
Zeus apologized to Athena by giving her the aegis; Athena then named herself Pallas Athena in tribute to her late friend. In classical poetry and art Classical Greece interpreted the Homeric aegis usually as a cover of some kind borne by Athena. It was supposed by Euripides (Ion, 995) that the aegis borne by Athena was the skin of the slain Gorgon, yet the usual understanding is that the Gorgoneion was added to the aegis, a votive offering from a grateful Perseus. In a similar interpretation, Aex, a daughter of Helios, represented as a great fire-breathing chthonic serpent similar to the Chimera, was slain and flayed by Athena, who afterwards wore its skin, the aegis, as a cuirass (Diodorus Siculus iii. 70), or as a chlamys. The Douris cup shows that the aegis was represented exactly as the skin of the great serpent, with its scales clearly delineated. John Tzetzes says that aegis was the skin of the monstrous giant Pallas whom Athena overcame and whose name she attached to her own. In a late rendering by Gaius Julius Hyginus (Poetical Astronomy ii. 13), Zeus is said to have used the skin of a pet goat owned by his nurse Amalthea (aigis "goat-skin") which suckled him in Crete, as a shield when he went forth to do battle against the Titans. The aegis appears in works of art sometimes as an animal's skin thrown over Athena's shoulders and arms, occasionally with a border of snakes, usually also bearing the Gorgon head, the gorgoneion. In some pottery it appears as a tasselled cover over Athena's dress. It is sometimes represented on the statues of Roman emperors, heroes, and warriors, and on coins, cameos and vases. A vestige of that appears in a portrait of Alexander the Great in a fresco from Pompeii dated to the first century BC, which shows the image of the head of a woman on his armor that resembles the Gorgon. Interpretations Herodotus thought he had identified the source of the aegis in ancient Libya, which was always a distant territory of ancient magic for the Greeks. "Athene's garments and aegis were borrowed by the Greeks from the Libyan women, who are dressed in exactly the same way, except that their leather garments are fringed with thongs, not serpents." Robert Graves in The Greek Myths (1955) asserts that the aegis in its Libyan sense had been a shamanic pouch containing various ritual objects, bearing the device of a monstrous serpent-haired visage with tusk-like teeth and a protruding tongue which was meant to frighten away the uninitiated. In this context, Graves identifies the aegis as clearly belonging first to Athena. One current interpretation is that the Hittite sacral hieratic hunting bag (kursas), a rough and shaggy goatskin that has been firmly established in literary texts and iconography by H.G. Güterbock, was a source of the aegis. References External links Theoi Project: "Aigis" Die Aigis: Zu Typologie und Ikonographie eines Mythischen Gegenstandes: a Doctoral dissertation on the Ægis (Westfälischen Wilhelms-Universität, Münster 1991) by Sigrid Vierck. Comparative mythology Objects in Greek mythology Greek shields Interpersonal relationships Medusa Mythography Mythological clothing Mythological shields Symbols of Athena
Aegis
[ "Biology" ]
1,573
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
2,635
https://en.wikipedia.org/wiki/Agarose
Agarose is a heteropolysaccharide, generally extracted from certain red algae. It is a linear polymer made up of the repeating unit of agarobiose, which is a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agarose is one of the two principal components of agar, and is purified from agar by removing agar's other component, agaropectin. Agarose is frequently used in molecular biology for the separation of large molecules, especially DNA, by electrophoresis. Slabs of agarose gels (usually 0.7 - 2%) for electrophoresis are readily prepared by pouring the warm, liquid solution into a mold. A wide range of different agaroses of varying molecular weights and properties are commercially available for this purpose. Agarose may also be formed into beads and used in a number of chromatographic methods for protein purification. Structure Agarose is a linear polymer with a molecular weight of about 120,000, consisting of alternating D-galactose and 3,6-anhydro-L-galactopyranose linked by α-(1→3) and β-(1→4) glycosidic bonds. The 3,6-anhydro-L-galactopyranose is an L-galactose with an anhydro bridge between the 3 and 6 positions, although some L-galactose units in the polymer may not contain the bridge. Some D-galactose and L-galactose units can be methylated, and pyruvate and sulfate are also found in small quantities. Each agarose chain contains ~800 molecules of galactose, and the agarose polymer chains form helical fibers that aggregate into supercoiled structure with a radius of 20-30 nanometer (nm). The fibers are quasi-rigid, and have a wide range of length depending on the agarose concentration. When solidified, the fibers form a three-dimensional mesh of channels of diameter ranging from 50 nm to >200 nm depending on the concentration of agarose used - higher concentrations yield lower average pore diameters. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. Properties Agarose is available as a white powder which dissolves in near-boiling water, and forms a gel when it cools. Agarose exhibits the phenomenon of thermal hysteresis in its liquid-to-gel transition, i.e. it gels and melts at different temperatures. The gelling and melting temperatures vary depending on the type of agarose. Standard agaroses derived from Gelidium has a gelling temperature of and a melting temperature of , while those derived from Gracilaria, due to its higher methoxy substituents, has a gelling temperature of and melting temperature of . The melting and gelling temperatures may be dependent on the concentration of the gel, particularly at low gel concentration of less than 1%. The gelling and melting temperatures are therefore given at a specified agarose concentration. Natural agarose contains uncharged methyl groups and the extent of methylation is directly proportional to the gelling temperature. Synthetic methylation however have the reverse effect, whereby increased methylation lowers the gelling temperature. A variety of chemically modified agaroses with different melting and gelling temperatures are available through chemical modifications. The agarose in the gel forms a meshwork that contains pores, and the size of the pores depends on the concentration of agarose added. On standing, the agarose gels are prone to syneresis (extrusion of water through the gel surface), but the process is slow enough to not interfere with the use of the gel. 
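Since gel concentrations such as the 0.7–2% quoted above are weight/volume percentages, the mass of agarose powder to weigh out follows directly from the target percentage and the buffer volume. The following is a minimal sketch of that arithmetic in Python; the w/v convention (grams per 100 mL) is the usual laboratory assumption rather than something stated in this article, and the 1% / 50 mL figures in the example are purely illustrative:

```python
def agarose_mass_grams(percent_w_v: float, buffer_volume_ml: float) -> float:
    """Grams of agarose powder for a weight/volume percentage gel.

    Assumes the common w/v convention: a 1% gel contains 1 g of agarose
    per 100 mL of buffer.
    """
    return percent_w_v * buffer_volume_ml / 100.0

# Example: a 1% slab gel cast in 50 mL of buffer needs 0.5 g of agarose.
# Higher percentages give smaller average pore sizes, as described above.
print(agarose_mass_grams(1.0, 50.0))  # 0.5
```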
Agarose gel can have high gel strength at low concentration, making it suitable as an anti-convection medium for gel electrophoresis. Agarose gels as dilute as 0.15% can form slabs for gel electrophoresis. The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups can slow down the movement of DNA molecules in a process called electroendosmosis (EEO). Low EEO (LE) agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids. Zero EEO agaroses are also available but these may be undesirable for some applications as they may be made by adding positively charged groups that can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used preferentially over agar as agaropectin in agar contains a significant amount of negatively charged sulphate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum protein, a high EEO may be desirable, and agaropectin may be added in the gel used. LE agarose is said to be better for preparative electrophoresis, i.e. when DNA needs to be extracted from an agarose gel. Low melting and gelling temperature agaroses The melting and gelling temperatures of agarose can be modified by chemical modifications, most commonly by hydroxyethylation, which reduces the number of intrastrand hydrogen bonds, resulting in lower melting and setting temperatures compared to standard agaroses. The exact temperature is determined by the degree of substitution, and many available low-melting-point (LMP) agaroses can remain fluid at range. This property allows enzymatic manipulations to be carried out directly after the DNA gel electrophoresis by adding slices of melted gel containing DNA fragment of interest to a reaction mixture. The LMP agarose contains fewer of the sulphates that can affect some enzymatic reactions, and is therefore preferably used for some applications. Hydroxyethylated agarose also has a smaller pore size (~90 nm) than standard agaroses. Hydroxyethylation may reduce the pore size by reducing the packing density of the agarose bundles, therefore LMP gel can also have an effect on the time and separation during electrophoresis. Ultra-low melting or gelling temperature agaroses may gel only at . Applications Agarose is a preferred matrix for work with proteins and nucleic acids as it has a broad range of physical, chemical and thermal stability, and its lower degree of chemical complexity also makes it less likely to interact with biomolecules. Agarose is most commonly used as the medium for analytical scale electrophoretic separation in agarose gel electrophoresis. Gels made from purified agarose have a relatively large pore size, making them useful for separation of large molecules, such as proteins and protein complexes >200 kilodaltons, as well as DNA fragments >100 basepairs. Agarose is also used widely for a number of other applications, for example immunodiffusion and immunoelectrophoresis, as the agarose fibers can function as anchor for immunocomplexes. Agarose gel electrophoresis Agarose gel electrophoresis is the routine method for resolving DNA in the laboratory. 
Agarose gels have lower resolving power for DNA than acrylamide gels, but they have greater range of separation, and are therefore usually used for DNA fragments with lengths of 50–20,000 bp (base pairs), although resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large protein molecules, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5-10 nm. The pore size of the gel affects the size of the DNA that can be sieved. The lower the concentration of the gel, the larger the pore size, and the larger the DNA that can be sieved. However low-concentration gels (0.1 - 0.2%) are fragile and therefore hard to handle, and the electrophoresis of large DNA molecules can take several days. The limit of resolution for standard agarose gel electrophoresis is around 750 kb. This limit can be overcome by PFGE, where alternating orthogonal electric fields are applied to the gel. The DNA fragments reorientate themselves when the applied field switches direction, but larger molecules of DNA take longer to realign themselves when the electric field is altered, while for smaller ones it is quicker, and the DNA can therefore be fractionated according to size. Agarose gels are cast in a mold, and when set, usually run horizontally submerged in a buffer solution. Tris-acetate-EDTA and Tris-Borate-EDTA buffers are commonly used, but other buffers such as Tris-phosphate, barbituric acid-sodium barbiturate or Tris-barbiturate buffers may be used in other applications. The DNA is normally visualized by staining with ethidium bromide and then viewed under a UV light, but other methods of staining are available, such as SYBR Green, GelRed, methylene blue, and crystal violet. If the separated DNA fragments are needed for further downstream experiment, they can be cut out from the gel in slices for further manipulation. Protein purification Agarose gel matrix is often used for protein purification, for example, in column-based preparative scale separation as in gel filtration chromatography, affinity chromatography and ion exchange chromatography. It is however not used as a continuous gel, rather it is formed into porous beads or resins of varying fineness. The beads are highly porous so that protein may flow freely through the beads. These agarose-based beads are generally soft and easily crushed, so they should be used under gravity-flow, low-speed centrifugation, or low-pressure procedures. The strength of the resins can be improved by increased cross-linking and chemical hardening of the agarose resins, however such changes may also result in a lower binding capacity for protein in some separation procedures such as affinity chromatography. Agarose is a useful material for chromatography because it does not absorb biomolecules to any significant extent, has good flow properties, and can tolerate extremes of pH and ionic strength as well as high concentration of denaturants such as 8M urea or 6M guanidine HCl. Examples of agarose-based matrix for gel filtration chromatography are Sepharose and WorkBeads 40 SEC (cross-linked beaded agarose), Praesto and Superose (highly cross-linked beaded agaroses), and Superdex (dextran covalently linked to agarose). For affinity chromatography, beaded agarose is the most commonly used matrix resin for the attachment of the ligands that bind protein. The ligands are linked covalently through a spacer to activated hydroxyl groups of agarose bead polymer. 
Proteins of interest can then be selectively bound to the ligands to separate them from other proteins, after which it can be eluted. The agarose beads used are typically of 4% and 6% densities with a high binding capacity for protein. Solid culture media Agarose plate may sometimes be used instead of agar for culturing organisms as agar may contain impurities that can affect the growth of the organism or some downstream procedures such as polymerase chain reaction (PCR). Agarose is also harder than agar and may therefore be preferable where greater gel strength is necessary, and its lower gelling temperature may prevent causing thermal shock to the organism when the cells are suspended in liquid before gelling. It may be used for the culture of strict autotrophic bacteria, plant protoplast, Caenorhabditis elegans, other organisms and various cell lines. Motility assays Agarose is sometimes used instead of agar to measure microorganism motility and mobility. Motile species will be able to migrate, albeit slowly, throughout the porous gel and infiltration rates can then be visualized. The gel's porosity is directly related to the concentration of agar or agarose in the medium, so different concentration gels may be used to assess a cell's swimming, swarming, gliding and twitching motility. Under-agarose cell migration assay may be used to measure chemotaxis and chemokinesis. A layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient. See also Agar SDD-AGE References Polysaccharides
Agarose
[ "Chemistry" ]
2,812
[ "Carbohydrates", "Polysaccharides" ]
2,637
https://en.wikipedia.org/wiki/Atomic%20absorption%20spectroscopy
Atomic absorption spectroscopy (AAS) is a spectroanalytical procedure for the quantitative measurement of chemical elements. AAS is based on the absorption of light by free atoms in the gaseous state that have been atomized from a sample. An alternative technique is atomic emission spectroscopy (AES). In analytical chemistry the technique is used for determining the concentration of a particular element (the analyte) in a sample to be analyzed. AAS can be used to determine over 70 different elements in solution, or directly in solid samples via electrothermal vaporization, and is used in pharmacology, biophysics, archaeology and toxicology research. Atomic absorption spectrometry (AAS) was first used as an analytical technique, and the underlying principles were established in the second half of the 19th century by Robert Wilhelm Bunsen and Gustav Robert Kirchhoff, both professors at the University of Heidelberg, Germany. The modern form of AAS was largely developed during the 1950s by a team of Australian chemists. They were led by Sir Alan Walsh at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Chemical Physics, in Melbourne, Australia. Atomic absorption spectrometry has many uses in different areas of chemistry, such as clinical analysis of metals in biological fluids and tissues such as whole blood, plasma, urine, saliva, brain tissue, liver, hair and muscle tissue. Atomic absorption spectrometry can be used in qualitative and quantitative analysis. Principles The technique makes use of the atomic absorption spectrum of a sample in order to assess the concentration of specific analytes within it. It requires standards with known analyte content to establish the relation between the measured absorbance and the analyte concentration and relies therefore on the Beer–Lambert law. Analyzing Samples with Atomic Absorption Spectroscopy (AAS) Atomic absorption spectroscopy measures the concentration of specific elements in a sample by analyzing their unique "fingerprint" in the form of an atomic absorption spectrum. The procedure can be summarized in five steps. Step 1: Sample preparation. The sample is typically dissolved in a suitable solvent (acids, water) to create a liquid solution. This puts the analytes into a form that can subsequently be converted into free atoms in the atomizer. For solid samples like ores or minerals, additional steps like grinding and digestion may be required to break down the matrix and liberate the analytes. Step 2: Atomization. The prepared solution is nebulized into a fine mist and introduced into a high-temperature flame (an air-acetylene or nitrous oxide-acetylene mix). The intense heat of the flame evaporates the solvent and dissociates the sample into free, predominantly ground-state analyte atoms. Step 3: Absorption. A hollow cathode lamp containing the same element as the analyte emits light at specific wavelengths that correspond to the energy differences between the ground state and excited states of the analyte atoms. As this light passes through the atomized sample, some photons are absorbed by ground-state analyte atoms, which are promoted to excited states. This absorption decreases the intensity of the light at the specific wavelength. Step 4: Measurement and analysis. The light intensity before and after passing through the sample is measured by a detector. The absorbance derived from these intensities is directly proportional to the concentration of the analyte in the sample, following the Beer–Lambert law A = εcl, where A is the measured absorbance, ε is the molar absorptivity (a constant specific to the element and wavelength), c is the concentration of the analyte, and l is the path length of the light through the sample. Step 5: Calibration and quantification. To determine the actual concentration of the analyte, the instrument is calibrated using standard solutions containing known concentrations of the element. By comparing the measured absorbance of the sample to the calibration curve, the concentration of the analyte in the original sample can be calculated (a short numerical sketch of this calibration step is given further below, after the description of flame atomizers). Feedback mechanism. The measured absorbance directly provides feedback on the concentration of the analyte in the sample. This feedback loop allows the AAS to analyze various samples efficiently and determine their elemental composition with high accuracy. In summary, AAS utilizes the unique absorption properties of elements to accurately quantify their concentration in samples. By preparing the sample, atomizing the analytes, measuring their absorption of specific light, and applying the Beer–Lambert law, this technique helps us understand the elemental makeup of diverse materials across various scientific and industrial fields. Instrumentation In order to analyze a sample for its atomic constituents, it has to be atomized. The atomizers most commonly used nowadays are flames and electrothermal (graphite tube) atomizers. The atoms should then be irradiated by optical radiation, and the radiation source could be an element-specific line radiation source or a continuum radiation source. The radiation then passes through a monochromator in order to separate the element-specific radiation from any other radiation emitted by the radiation source, which is finally measured by a detector. Atomizers The atomizers most commonly used nowadays are spectroscopic flames and electrothermal atomizers. Other atomizers, such as glow-discharge atomization, hydride atomization, or cold-vapor atomization, might be used for special purposes. Flame atomizers The oldest and most commonly used atomizers in AAS are flames, principally the air-acetylene flame with a temperature of about 2300 °C and the nitrous oxide (N2O)-acetylene flame with a temperature of about 2700 °C. The latter flame, in addition, offers a more reducing environment, being ideally suited for analytes with high affinity to oxygen. Liquid or dissolved samples are typically used with flame atomizers. The sample solution is aspirated by a pneumatic analytical nebulizer, transformed into an aerosol, which is introduced into a spray chamber, where it is mixed with the flame gases and conditioned in a way that only the finest aerosol droplets (< 10 μm) enter the flame. This conditioning process reduces interference, but only about 5% of the aerosolized solution reaches the flame because of it. On top of the spray chamber is a burner head that produces a flame that is laterally long (usually 5–10 cm) and only a few mm deep. The radiation beam passes through this flame at its longest axis, and the flame gas flow-rates may be adjusted to produce the highest concentration of free atoms. The burner height may also be adjusted, so that the radiation beam passes through the zone of highest atom cloud density in the flame, resulting in the highest sensitivity. 
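To make the calibration and quantification step described above concrete, the sketch below fits a straight-line calibration of absorbance against known standard concentrations and uses it to estimate an unknown. All numbers are invented for illustration and are not taken from this article; a real instrument applies the same idea through its own software.

```python
import numpy as np

# Hypothetical calibration standards: concentration (mg/L) vs. measured absorbance.
standard_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
standard_abs  = np.array([0.002, 0.051, 0.103, 0.205, 0.410])

# Beer-Lambert predicts A = (epsilon * l) * c, i.e. a straight line through the
# origin; fitting slope and intercept also absorbs a small blank offset.
slope, intercept = np.polyfit(standard_conc, standard_abs, 1)

# Concentration of an unknown sample from its measured absorbance.
sample_abs = 0.158
sample_conc = (sample_abs - intercept) / slope
print(f"estimated concentration: {sample_conc:.2f} mg/L")
```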
The processes in a flame include the stages of desolvation (drying) in which the solvent is evaporated and the dry sample nano-particles remain, vaporization (transfer to the gaseous phase) in which the solid particles are converted into gaseous molecule, atomization in which the molecules are dissociated into free atoms, and ionization where (depending on the ionization potential of the analyte atoms and the energy available in a particular flame) atoms may be in part converted to gaseous ions. Each of these stages includes the risk of interference in case the degree of phase transfer is different for the analyte in the calibration standard and in the sample. Ionization is generally undesirable, as it reduces the number of atoms that are available for measurement, i.e., the sensitivity. In flame AAS a steady-state signal is generated during the time period when the sample is aspirated. This technique is typically used for determinations in the mg L−1 range, and may be extended down to a few μg L−1 for some elements. Electrothermal atomizers Electrothermal AAS (ET AAS) using graphite tube atomizers was pioneered by Boris V. L’vov at the Saint Petersburg Polytechnical Institute, Russia, since the late 1950s, and investigated in parallel by Hans Massmann at the Institute of Spectrochemistry and Applied Spectroscopy (ISAS) in Dortmund, Germany. Although a wide variety of graphite tube designs have been used over the years, the dimensions nowadays are typically 20–25 mm in length and 5–6 mm inner diameter. With this technique liquid/dissolved, solid and gaseous samples may be analyzed directly. A measured volume (typically 10–50 μL) or a weighed mass (typically around 1 mg) of a solid sample are introduced into the graphite tube and subject to a temperature program. This typically consists of stages, such as drying – the solvent is evaporated; pyrolysis – the majority of the matrix constituents are removed; atomization – the analyte element is released to the gaseous phase; and cleaning – eventual residues in the graphite tube are removed at high temperature. The graphite tubes are heated via their ohmic resistance using a low-voltage high-current power supply; the temperature in the individual stages can be controlled very closely, and temperature ramps between the individual stages facilitate separation of sample components. Tubes may be heated transversely or longitudinally, where the former ones have the advantage of a more homogeneous temperature distribution over their length. The so-called stabilized temperature platform furnace (STPF) concept, proposed by Walter Slavin, based on research of Boris L’vov, makes ET AAS essentially free from interference. The major components of this concept are atomization of the sample from a graphite platform inserted into the graphite tube (L’vov platform) instead of from the tube wall in order to delay atomization until the gas phase in the atomizer has reached a stable temperature; use of a chemical modifier in order to stabilize the analyte to a pyrolysis temperature that is sufficient to remove the majority of the matrix components; and integration of the absorbance over the time of the transient absorption signal instead of using peak height absorbance for quantification. In ET AAS a transient signal is generated, the area of which is directly proportional to the mass of analyte (not its concentration) introduced into the graphite tube. This technique has the advantage that any kind of sample, solid, liquid or gaseous, can be analyzed directly. 
Its sensitivity is 2–3 orders of magnitude higher than that of flame AAS, so that determinations in the low μg L−1 range (for a typical sample volume of 20 μL) and ng g−1 range (for a typical sample mass of 1 mg) can be carried out. It shows a very high degree of freedom from interferences, so that ET AAS might be considered the most robust technique available nowadays for the determination of trace elements in complex matrices. Specialized atomization techniques While flame and electrothermal vaporizers are the most common atomization techniques, several other atomization methods are utilized for specialized use. Glow-discharge atomization A glow-discharge device (GD) serves as a versatile source, as it can simultaneously introduce and atomize the sample. The glow discharge occurs in a low-pressure argon gas atmosphere between 1 and 10 torr. In this atmosphere lies a pair of electrodes applying a DC voltage of 250 to 1000 V to break down the argon gas into positively charged ions and electrons. These ions, under the influence of the electric field, are accelerated into the cathode surface containing the sample, bombarding the sample and causing neutral sample atom ejection through the process known as sputtering. The atomic vapor produced by this discharge is composed of ions, ground state atoms, and fraction of excited atoms. When the excited atoms relax back into their ground state, a low-intensity glow is emitted, giving the technique its name. The requirement for samples of glow discharge atomizers is that they are electrical conductors. Consequently, atomizers are most commonly used in the analysis of metals and other conducting samples. However, with proper modifications, it can be utilized to analyze liquid samples as well as nonconducting materials by mixing them with a conductor (e.g. graphite). Hydride atomization Hydride generation techniques are specialized in solutions of specific elements. The technique provides a means of introducing samples containing arsenic, antimony, selenium, bismuth, and lead into an atomizer in the gas phase. With these elements, hydride atomization enhances detection limits by a factor of 10 to 100 compared to alternative methods. Hydride generation occurs by adding an acidified aqueous solution of the sample to a 1% aqueous solution of sodium borohydride, all of which is contained in a glass vessel. The volatile hydride generated by the reaction that occurs is swept into the atomization chamber by an inert gas, where it undergoes decomposition. This process forms an atomized form of the analyte, which can then be measured by absorption or emission spectrometry. Cold-vapor atomization The cold-vapor technique is an atomization method limited only for the determination of mercury, due to it being the only metallic element to have a large vapor pressure at ambient temperature. Because of this, it has an important use in determining organic mercury compounds in samples and their distribution in the environment. The method initiates by converting mercury into Hg2+ by oxidation from nitric and sulfuric acids, followed by a reduction of Hg2+ with tin(II) chloride. The mercury, is then swept into a long-pass absorption tube by bubbling a stream of inert gas through the reaction mixture. The concentration is determined by measuring the absorbance of this gas at 253.7 nm. Detection limits for this technique are in the parts-per-billion range making it an excellent mercury detection atomization method. 
Radiation sources We have to distinguish between line source AAS (LS AAS) and continuum source AAS (CS AAS). In classical LS AAS, as proposed by Alan Walsh, the high spectral resolution required for AAS measurements is provided by the radiation source itself, which emits the spectrum of the analyte in the form of lines that are narrower than the absorption lines. Continuum sources, such as deuterium lamps, are only used for background correction purposes. The advantage of this technique is that only a medium-resolution monochromator is necessary for measuring AAS; however, it has the disadvantage that usually a separate lamp is required for each element that has to be determined. In CS AAS, in contrast, a single lamp, emitting a continuum spectrum over the entire spectral range of interest, is used for all elements. Obviously, a high-resolution monochromator is required for this technique, as will be discussed later. Hollow cathode lamps Hollow cathode lamps (HCL) are the most common radiation source in LS AAS. Inside the sealed lamp, filled with argon or neon gas at low pressure, is a cylindrical metal cathode containing the element of interest and an anode. A high voltage is applied across the anode and cathode, resulting in an ionization of the fill gas. The gas ions are accelerated towards the cathode and, upon impact on the cathode, sputter cathode material that is excited in the glow discharge to emit the radiation of the sputtered material, i.e., the element of interest. In the majority of cases single-element lamps are used, in which the cathode is pressed predominantly out of compounds of the target element. Multi-element lamps are available with combinations of compounds of the target elements pressed in the cathode. Multi-element lamps produce slightly less sensitivity than single-element lamps, and the combinations of elements have to be selected carefully to avoid spectral interferences. Most multi-element lamps combine a handful of elements, e.g. 2–8. Atomic absorption spectrometers can feature as few as 1–2 hollow cathode lamp positions, while automated multi-element spectrometers typically offer 8–12 lamp positions. Electrodeless discharge lamps Electrodeless discharge lamps (EDL) contain a small quantity of the analyte as a metal or a salt in a quartz bulb together with an inert gas, typically argon, at low pressure. The bulb is inserted into a coil that generates an electromagnetic radio frequency field, resulting in a low-pressure inductively coupled discharge in the lamp. The emission from an EDL is higher than that from an HCL, and the line width is generally narrower, but EDLs need a separate power supply and might need a longer time to stabilize. Deuterium lamps Deuterium HCL or even hydrogen HCL and deuterium discharge lamps are used in LS AAS for background correction purposes. The radiation intensity emitted by these lamps decreases significantly with increasing wavelength, so that they can only be used in the wavelength range between 190 and about 320 nm. Continuum sources When a continuum radiation source is used for AAS, it is necessary to use a high-resolution monochromator, as will be discussed later. In addition, it is necessary that the lamp emits radiation of intensity at least an order of magnitude above that of a typical HCL over the entire wavelength range from 190 nm to 900 nm. A special high-pressure xenon short arc lamp, operating in a hot-spot mode, has been developed to fulfill these requirements. 
Spectrometer As already pointed out above, there is a difference between medium-resolution spectrometers that are used for LS AAS and high-resolution spectrometers that are designed for CS AAS. The spectrometer includes the spectral sorting device (monochromator) and the detector. Spectrometers for LS AAS In LS AAS the high resolution that is required for the measurement of atomic absorption is provided by the narrow line emission of the radiation source, and the monochromator simply has to resolve the analytical line from other radiation emitted by the lamp. This can usually be accomplished with a band pass between 0.2 and 2 nm, i.e., a medium-resolution monochromator. Another feature to make LS AAS element-specific is modulation of the primary radiation and the use of a selective amplifier that is tuned to the same modulation frequency, as already postulated by Alan Walsh. This way any (unmodulated) radiation emitted for example by the atomizer can be excluded, which is imperative for LS AAS. Simple monochromators of the Littrow or (better) the Czerny-Turner design are typically used for LS AAS. Photomultiplier tubes are the most frequently used detectors in LS AAS, although solid state detectors might be preferred because of their better signal-to-noise ratio. Spectrometers for CS AAS When a continuum radiation source is used for AAS measurement it is indispensable to work with a high-resolution monochromator. The resolution has to be equal to or better than the half-width of an atomic absorption line (about 2 pm) in order to avoid losses of sensitivity and linearity of the calibration graph. The research with high-resolution (HR) CS AAS was pioneered by the groups of O’Haver and Harnly in the US, who also developed the (up until now) only simultaneous multi-element spectrometer for this technique. The breakthrough, however, came when the group of Becker-Ross in Berlin, Germany, built a spectrometer entirely designed for HR-CS AAS. The first commercial equipment for HR-CS AAS was introduced by Analytik Jena (Jena, Germany) at the beginning of the 21st century, based on the design proposed by Becker-Ross and Florek. These spectrometers use a compact double monochromator with a prism pre-monochromator and an echelle grating monochromator for high resolution. A linear charge-coupled device (CCD) array with 200 pixels is used as the detector. The second monochromator does not have an exit slit; hence the spectral environment at both sides of the analytical line becomes visible at high resolution. As typically only 3–5 pixels are used to measure the atomic absorption, the other pixels are available for correction purposes. One of these corrections is that for lamp flicker noise, which is independent of wavelength, resulting in measurements with very low noise level; other corrections are those for background absorption, as will be discussed later. Background absorption and background correction The relatively small number of atomic absorption lines (compared to atomic emission lines) and their narrow width (a few pm) make spectral overlap rare; there are only few examples known that an absorption line from one element will overlap with another. Molecular absorption, in contrast, is much broader, so that it is more likely that some molecular absorption band will overlap with an atomic line. This kind of absorption might be caused by un-dissociated molecules of concomitant elements of the sample or by flame gases. 
We have to distinguish between the spectra of di-atomic molecules, which exhibit a pronounced fine structure, and those of larger (usually tri-atomic) molecules that don't show such fine structure. Another source of background absorption, particularly in ET AAS, is scattering of the primary radiation at particles that are generated in the atomization stage, when the matrix could not be removed sufficiently in the pyrolysis stage. All these phenomena, molecular absorption and radiation scattering, can result in artificially high absorption and an improperly high (erroneous) calculation for the concentration or mass of the analyte in the sample. There are several techniques available to correct for background absorption, and they are significantly different for LS AAS and HR-CS AAS. Background correction techniques in LS AAS In LS AAS background absorption can only be corrected using instrumental techniques, and all of them are based on two sequential measurements: firstly, total absorption (atomic plus background), secondly, background absorption only. The difference of the two measurements gives the net atomic absorption. Because of this, and because of the use of additional devices in the spectrometer, the signal-to-noise ratio of background-corrected signals is always significantly inferior compared to uncorrected signals. It should also be pointed out that in LS AAS there is no way to correct for (the rare case of) a direct overlap of two atomic lines. In essence there are three techniques used for background correction in LS AAS: Deuterium background correction This is the oldest and still most commonly used technique, particularly for flame AAS. In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background absorption over the entire width of the exit slit of the spectrometer. The use of a separate lamp makes this technique the least accurate one, as it cannot correct for any structured background. It also cannot be used at wavelengths above about 320 nm, as the emission intensity of the deuterium lamp becomes very weak. The use of deuterium HCL is preferable compared to an arc lamp due to the better fit of the image of the former lamp with that of the analyte HCL. Smith-Hieftje background correction This technique (named after their inventors) is based on the line-broadening and self-reversal of emission lines from HCL when high current is applied. Total absorption is measured with normal lamp current, i.e., with a narrow emission line, and background absorption after application of a high-current pulse with the profile of the self-reversed line, which has little emission at the original wavelength, but strong emission on both sides of the analytical line. The advantage of this technique is that only one radiation source is used; among the disadvantages are that the high-current pulses reduce lamp lifetime, and that the technique can only be used for relatively volatile elements, as only those exhibit sufficient self-reversal to avoid dramatic loss of sensitivity. Another problem is that background is not measured at the same wavelength as total absorption, making the technique unsuitable for correcting structured background. 
Zeeman-effect background correction An alternating magnetic field is applied at the atomizer (graphite furnace) to split the absorption line into three components, the π component, which remains at the same position as the original absorption line, and two σ components, which are moved to higher and lower wavelengths, respectively. Total absorption is measured without magnetic field and background absorption with the magnetic field on. The π component has to be removed in this case, e.g. using a polarizer, and the σ components do not overlap with the emission profile of the lamp, so that only the background absorption is measured. The advantage of this technique is that total and background absorption are measured with the same emission profile of the same lamp, so that any kind of background, including background with fine structure, can be corrected accurately, unless the molecule responsible for the background is also affected by the magnetic field. The disadvantages are that using a chopper as a polarizer reduces the signal-to-noise ratio, and that the spectrometer and the power supply for the powerful magnet needed to split the absorption line add considerable complexity. Background correction techniques in HR-CS AAS In HR-CS AAS background correction is carried out mathematically in the software using information from detector pixels that are not used for measuring atomic absorption; hence, in contrast to LS AAS, no additional components are required for background correction. Background correction using correction pixels It has already been mentioned that in HR-CS AAS lamp flicker noise is eliminated using correction pixels. In fact, any increase or decrease in radiation intensity that is observed to the same extent at all pixels chosen for correction is eliminated by the correction algorithm. This obviously also includes a reduction of the measured intensity due to radiation scattering or molecular absorption, which is corrected in the same way. As measurement of total and background absorption, and correction for the latter, are strictly simultaneous (in contrast to LS AAS), even the fastest changes of background absorption, as they may be observed in ET AAS, do not cause any problem. In addition, as the same algorithm is used for background correction and elimination of lamp noise, the background-corrected signals show a much better signal-to-noise ratio compared to the uncorrected signals, which is also in contrast to LS AAS. Background correction using a least-squares algorithm The above technique can obviously not correct for a background with fine structure, as in this case the absorbance will be different at each of the correction pixels. In this case HR-CS AAS offers the possibility to measure correction spectra of the molecule(s) that is (are) responsible for the background and store them in the computer. These spectra are then multiplied by a factor to match the intensity of the sample spectrum and subtracted pixel by pixel and spectrum by spectrum from the sample spectrum using a least-squares algorithm. This might sound complex, but firstly the number of di-atomic molecules that can exist at the temperatures of the atomizers used in AAS is relatively small, and secondly the correction is performed by the computer within a few seconds. The same algorithm can actually also be used to correct for direct line overlap of two atomic absorption lines, making HR-CS AAS the only AAS technique that can correct for this kind of spectral interference.
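The least-squares correction described above can be sketched numerically. The following Python snippet is a toy model, not the instrument software: the spectra, pixel numbers, scale factor and noise level are invented, and a single stored reference spectrum plus a flat offset stand in for the full set of molecular correction spectra:

import numpy as np

pixels = np.arange(200)                    # the CCD array discussed above has 200 pixels
line_pixels = slice(98, 103)               # 3–5 central pixels carry the atomic absorption

rng = np.random.default_rng(0)
reference = 0.02 + 0.015 * np.sin(pixels / 7.0)                  # stored spectrum of the interfering molecule
atomic_line = 0.30 * np.exp(-0.5 * ((pixels - 100) / 1.2) ** 2)  # analyte absorption profile
sample = 1.8 * reference + atomic_line + rng.normal(0, 1e-3, pixels.size)

# Fit the reference spectrum (plus a flat offset) to the sample using only the
# correction pixels, i.e. excluding the pixels that measure the atomic line.
mask = np.ones(pixels.size, dtype=bool)
mask[line_pixels] = False
design = np.column_stack([reference[mask], np.ones(mask.sum())])
coeffs, *_ = np.linalg.lstsq(design, sample[mask], rcond=None)

# Subtract the scaled background from every pixel and integrate the net atomic line.
net = sample - (coeffs[0] * reference + coeffs[1])
print(f"fitted scale factor: {coeffs[0]:.2f}")
print(f"net atomic absorbance over the line pixels: {net[line_pixels].sum():.3f}")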
See also Absorption spectroscopy Beer–Lambert law Inductively coupled plasma mass spectrometry Laser absorption spectrometry References Further reading B. Welz, M. Sperling (1999), Atomic Absorption Spectrometry, Wiley-VCH, Weinheim, Germany, . A. Walsh (1955), The application of atomic absorption spectra to chemical analysis, Spectrochim. Acta 7: 108–117. J.A.C. Broekaert (1998), Analytical Atomic Spectrometry with Flames and Plasmas, 3rd Edition, Wiley-VCH, Weinheim, Germany. B.V. L’vov (1984), Twenty-five years of furnace atomic absorption spectroscopy, Spectrochim. Acta Part B, 39: 149–157. B.V. L’vov (2005), Fifty years of atomic absorption spectrometry; J. Anal. Chem., 60: 382–392. H. Massmann (1968), Vergleich von Atomabsorption und Atomfluoreszenz in der Graphitküvette, Spectrochim. Acta Part B, 23: 215–226. W. Slavin, D.C. Manning, G.R. Carnrick (1981), The stabilized temperature platform furnace, At. Spectrosc. 2: 137–145. B. Welz, H. Becker-Ross, S. Florek, U. Heitmann (2005), High-resolution Continuum Source AAS, Wiley-VCH, Weinheim, Germany, . H. Becker-Ross, S. Florek, U. Heitmann, R. Weisse (1996), Influence of the spectral bandwidth of the spectrometer on the sensitivity using continuum source AAS, Fresenius J. Anal. Chem. 355: 300–303. J.M. Harnly (1986), Multi element atomic absorption with a continuum source, Anal. Chem. 58: 933A-943A. Skoog, Douglas (2007). Principles of Instrumental Analysis (6th ed.). Canada: Thomson Brooks/Cole. . External links Absorption spectroscopy Australian inventions Scientific techniques Analytical chemistry
Atomic absorption spectroscopy
[ "Physics", "Chemistry" ]
6,114
[ "nan", "Spectroscopy", "Spectrum (physical sciences)", "Absorption spectroscopy" ]
2,661
https://en.wikipedia.org/wiki/Affection
Affection or fondness is a "disposition or state of mind or body" commonly linked to a feeling or type of love. It has led to multiple branches in philosophy and psychology that discuss emotion, disease, influence, and state of being. Often, "affection" denotes more than mere goodwill or friendship. Writers on ethics generally use the word to refer to distinct states of feeling, both lasting and temporary. Some contrast it with passion as being free from the distinctively sensual element. Affection can elicit diverse emotional reactions such as embarrassment, disgust, pleasure, and annoyance. The emotional and physical effect of affection also varies between the giver and the receiver. Restricted definition Sometimes the term is restricted to emotional states directed towards living entities, including humans and animals. Affection is often compared with passion, stemming from the Greek word . Consequently, references to affection are found in the works of philosophers such as René Descartes, Baruch Spinoza, and early British ethicists. Despite these associations, it is commonly differentiated from passion on various grounds. Some definitions of affection exclude feelings of anxiety or heightened excitement, elements typically linked to passion. In this narrower context, the term holds significance in ethical frameworks, particularly concerning social or parental affections, forming a facet of moral duties and virtue. Ethical perspectives may hinge on whether affection is perceived as voluntary. Expression Affection can be communicated by looks, words, gestures, or touches. It conveys love and social connection. The five love languages explains how couples can communicate affections to each other. Affectionate behavior may have evolved from parental nurturing behavior due to its associations with hormonal rewards. Such affection has been shown to influence brain development in infants, especially their biochemical systems and prefrontal development. Affectionate gestures can become undesirable if they insinuate potential harm to one's welfare. However, when welcomed, such behavior can offer several health benefits. Some theories suggest that positive sentiments enhance individuals' inclination to engage socially, and the sense of closeness fostered by affection contributes to nurturing positive sentiments among them. Benefits of affection Affection exchange is an adaptive human behavior that benefits well-being. Expressing affection brings emotional, physical, and relational gains for people and their close connections. Sharing positive emotions yields health advantages like reduced stress hormones, lower cholesterol, lower blood pressure, and a stronger immune system. Expressing affection, not merely feeling affection, is internally rewarding. Even if not reciprocated, givers still experience its effects. Parental relationships Affectionate behavior is frequently considered an outcome of parental nurturing, tied to hormonal rewards. Both positive and negative parental actions may lead to health issues in later life. Neglect and abuse result in poorer well-being and mental health, contrasting with affection's positive effects. A 2013 study highlighted the impact of early child abuse and lack of affection on physical health. Affectionism Affectionism is a school of thought that considers affections to be of central importance. Although it is not found in mainstream Western philosophy, it does exist in Indian philosophy. See also References External links Emotions Love Personal life Phrenology
Affection
[ "Biology" ]
632
[ "Phrenology", "Biology theories", "Obsolete biology theories" ]
2,665
https://en.wikipedia.org/wiki/Affray
In many legal jurisdictions related to English common law, affray is a public order offence consisting of the fighting of one or more persons in a public place to the terror (in ) of ordinary people. Depending on their actions, and the laws of the prevailing jurisdiction, those engaged in an affray may also render themselves liable to prosecution for assault, unlawful assembly, or riot; if so, it is for one of these offences that they are usually charged. United Kingdom England and Wales The common law offence of affray was abolished for England and Wales on 1 April 1987. Affray is now a statutory offence that is triable either way. It is created by section 3 of the Public Order Act 1986 which provides: The term "violence" is defined by section 8. Section 3(6) once provided that a constable could arrest without warrant anyone he reasonably suspected to be committing affray, but that subsection was repealed by paragraph 26(2) of Schedule 7 to, and Schedule 17 to, the Serious Organised Crime and Police Act 2005, which includes more general provisions for police to make arrests without warrant. The mens rea of affray is that person is guilty of affray only if he intends to use or threaten violence or is aware that his conduct may be violent or threaten violence. The offence of affray has been used by HM Government to address the problem of drunken or violent individuals who cause serious trouble on airliners. In R v Childs & Price (2015), the Court of Appeal quashed a murder verdict and replaced it with affray, having dismissed an allegation of common purpose. Northern Ireland Affray is a serious offence for the purposes of Chapter 3 of the Criminal Justice (Northern Ireland) Order 2008. Australia In New South Wales, section 93C of Crimes Act 1900 defines that a person will be guilty of affray if he or she threatens unlawful violence towards another and his or her conduct is such as would cause a person of reasonable firmness present at the scene to fear for his or her personal safety. A person will only be guilty of affray if the person intends to use or threaten violence or is aware that his or her conduct may be violent or threaten violence. The maximum penalty for an offence of affray contrary to section 93C is a period of imprisonment of 10 years. In Queensland, section 72 of the Criminal Code of 1899 defines affray as taking part in a fight in a public highway or taking part in a fight of such a nature as to alarm the public in any other place to which the public have access. This definition is taken from that in the English Criminal Code Bill of 1880, cl. 96. Section 72 says "Any person who takes part in a fight in a public place, or takes part in a fight of such a nature as to alarm the public in any other place to which the public have access, commits a misdemeanour. Maximum penalty—1 year’s imprisonment." In Victoria, Affray was a common law offence until 2017, when it was abolished and was replaced with the statutory offence that can be found under section 195H of the Crimes Act 1958 (Vic). The section defines Affray as the use or threat of unlawful violence by a person in a manner that would cause a person of reasonable firmness present at the scene to be terrified. However, a person who commits this conduct may only be found guilty of Affray if the use or threat of violence was intended, or if the person was reckless as to whether the conduct involves the use or threat of violence. 
If found guilty, the maximum penalty that may be imposed for Affray is imprisonment for 5 years or, if at the time of committing the offence the person was wearing a face covering used primarily to conceal their identity or to protect them from the effects of crowd-controlling substances, imprisonment for 7 years. India The Indian Penal Code (sect. 159) adopts the old English common law definition of affray, with the substitution of "actual disturbance of the peace" for "causing terror to the lieges". New Zealand In New Zealand affray has been codified as "fighting in a public place" by section 7 of the Summary Offences Act 1981. South Africa Under the Roman-Dutch law in force in South Africa affray falls within the definition of vis publica. United States In the United States, the English common law as to affray applies, subject to certain modifications by the statutes of particular states. See also Assault Battery Combat References Blackstone's Police Manual Volume 4: General police duties, Fraser Simpson (2006). p. 247. Oxford University Press. Crimes Legal terminology
Affray
[ "Biology" ]
960
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
2,703
https://en.wikipedia.org/wiki/Aberration%20%28astronomy%29
In astronomy, aberration (also referred to as astronomical aberration, stellar aberration, or velocity aberration) is a phenomenon where celestial objects exhibit an apparent motion about their true positions based on the velocity of the observer: it causes objects to appear to be displaced towards the observer's direction of motion. The change in angle is of the order of v/c, where c is the speed of light and v is the velocity of the observer. In the case of "stellar" or "annual" aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth's velocity changes as it revolves around the Sun, by a maximum angle of approximately 20 arcseconds in right ascension or declination. The term aberration has historically been used to refer to a number of related phenomena concerning the propagation of light in moving bodies. Aberration is distinct from parallax, which is a change in the apparent position of a relatively nearby object, as measured by a moving observer, relative to more distant objects that define a reference frame. The amount of parallax depends on the distance of the object from the observer, whereas aberration does not. Aberration is also related to light-time correction and relativistic beaming, although it is often considered separately from these effects. Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of special relativity. It was first observed in the late 1600s by astronomers searching for stellar parallax in order to confirm the heliocentric model of the Solar System. However, it was not understood at the time to be a different phenomenon. In 1727, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley's theory was incompatible with 19th-century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz's aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz's elaboration of Maxwell's electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of special relativity in 1905, which presents a general form of the equation for aberration in terms of such theory. Explanation Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to consider the apparent direction of falling rain. If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed. The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer's frame. This effect is sometimes called the "searchlight" or "headlight" effect. In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth's moving frame is tilted relative to the angle observed in the Sun's frame.
Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun. While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes observable even at the classical level (see history). The theory of special relativity is required to correctly account for aberration. The relativistic explanation is very similar to the classical one however, and in both theories aberration may be understood as a case of addition of velocities. Classical explanation In the Sun's frame, consider a beam of light with velocity equal to the speed of light , with x and y velocity components and , and thus at an angle such that . If the Earth is moving at velocity in the x direction relative to the Sun, then by velocity addition the x component of the beam's velocity in the Earth's frame of reference is , and the y velocity is unchanged, . Thus the angle of the light in the Earth's frame in terms of the angle in the Sun's frame is In the case of , this result reduces to , which in the limit may be approximated by . Relativistic explanation The reasoning in the relativistic case is the same except that the relativistic velocity addition formulas must be used, which can be derived from Lorentz transformations between different frames of reference. These formulas are where , giving the components of the light beam in the Earth's frame in terms of the components in the Sun's frame. The angle of the beam in the Earth's frame is thus In the case of , this result reduces to , and in the limit this may be approximated by . This relativistic derivation keeps the speed of light constant in all frames of reference, unlike the classical derivation above. Relationship to light-time correction and relativistic beaming Aberration is related to two other phenomena, light-time correction, which is due to the motion of an observed object during the time taken by its light to reach an observer, and relativistic beaming, which is an angling of the light emitted by a moving light source. It can be considered equivalent to them but in a different inertial frame of reference. In aberration, the observer is considered to be moving relative to a (for the sake of simplicity) stationary light source, while in light-time correction and relativistic beaming the light source is considered to be moving relative to a stationary observer. Consider the case of an observer and a light source moving relative to each other at constant velocity, with a light beam moving from the source to the observer. At the moment of emission, the beam in the observer's rest frame is tilted compared to the one in the source's rest frame, as understood through relativistic beaming. During the time it takes the light beam to reach the observer the light source moves in the observer's frame, and the 'true position' of the light source is displaced relative to the apparent position the observer sees, as explained by light-time correction. Finally, the beam in the observer's frame at the moment of observation is tilted compared to the beam in source's frame, which can be understood as an aberrational effect. Thus, a person in the light source's frame would describe the apparent tilting of the beam in terms of aberration, while a person in the observer's frame would describe it as a light-time effect. 
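As a rough numerical check of the classical and relativistic expressions discussed above, the following Python sketch evaluates the standard textbook forms of the two aberration formulas — with the angle measured from the direction of the observer's motion, a convention assumed here rather than taken from the derivation — and shows that for a star seen at 90° both give an apparent shift of about 20.5 arcseconds, differing only at order (v/c)²:

import math

C = 299_792_458.0        # speed of light, m/s
V_EARTH = 29_780.0       # mean orbital speed of the Earth, m/s

def classical_aberration(theta, v=V_EARTH, c=C):
    # Bradley-style (Galilean) velocity addition
    return math.atan2(math.sin(theta), math.cos(theta) + v / c)

def relativistic_aberration(theta, v=V_EARTH, c=C):
    # special-relativistic aberration formula
    beta = v / c
    return math.acos((math.cos(theta) + beta) / (1.0 + beta * math.cos(theta)))

theta = math.radians(90.0)   # star at right angles to the Earth's motion
for name, formula in (("classical", classical_aberration), ("relativistic", relativistic_aberration)):
    shift_arcsec = math.degrees(theta - formula(theta)) * 3600
    print(f"{name:>12}: apparent shift = {shift_arcsec:.3f} arcsec")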
This relationship between aberration, light-time correction, and relativistic beaming is only valid if the observer's and source's frames are inertial frames. In practice, because the Earth is not an inertial rest frame but experiences centripetal acceleration towards the Sun, many aberrational effects such as annual aberration on Earth cannot be considered light-time corrections. However, if the time between emission and detection of the light is short compared to the orbital period of the Earth, the Earth may be approximated as an inertial frame and aberrational effects are equivalent to light-time corrections. Types The Astronomical Almanac describes several different types of aberration, arising from differing components of the Earth's and observed object's motion: Stellar aberration: "The apparent angular displacement of the observed position of a celestial body resulting from the motion of the observer. Stellar aberration is divided into diurnal, annual, and secular components." Annual aberration: "The component of stellar aberration resulting from the motion of the Earth about the Sun." Diurnal aberration: "The component of stellar aberration resulting from the observer's diurnal motion about the center of the Earth due to the Earth's rotation." Secular aberration: "The component of stellar aberration resulting from the essentially uniform and almost rectilinear motion of the entire solar system in space. Secular aberration is usually disregarded." Planetary aberration: "The apparent angular displacement of the observed position of a solar system body from its instantaneous geocentric direction as would be seen by an observer at the geocenter. This displacement is caused by the aberration of light and light-time displacement." Annual aberration Annual aberration is caused by the motion of an observer on Earth as the planet revolves around the Sun. Due to orbital eccentricity, the orbital velocity of Earth (in the Sun's rest frame) varies periodically during the year as the planet traverses its elliptic orbit and consequently the aberration also varies periodically, typically causing stars to appear to move in small ellipses. Approximating Earth's orbit as circular, the maximum displacement of a star due to annual aberration is known as the constant of aberration, conventionally represented by κ. It may be calculated using the relation κ = v/c, substituting the Earth's average speed in the Sun's frame for v and the speed of light for c. Its accepted value is 20.49552 arcseconds (sec) or 0.000099365 radians (rad) (at J2000). Assuming a circular orbit, annual aberration causes stars exactly on the ecliptic (the plane of Earth's orbit) to appear to move back and forth along a straight line, varying by κ on either side of their position in the Sun's frame. A star that is precisely at one of the ecliptic poles (at 90° from the ecliptic plane) will appear to move in a circle of radius κ about its true position, and stars at intermediate ecliptic latitudes will appear to move along a small ellipse. For illustration, consider a star at the northern ecliptic pole viewed by an observer at a point on the Arctic Circle. Such an observer will see the star transit at the zenith, once every day (strictly speaking, every sidereal day). At the time of the March equinox, Earth's orbit carries the observer in a southwards direction, and the star's apparent declination is therefore displaced to the south by an angle of κ. On the September equinox, the star's position is displaced to the north by an equal and opposite amount.
On either solstice, the displacement in declination is 0. Conversely, the amount of displacement in right ascension is 0 on either equinox and at maximum on either solstice. In actuality, Earth's orbit is slightly elliptic rather than circular, and its speed varies somewhat over the course of its orbit, which means the description above is only approximate. Aberration is more accurately calculated using Earth's instantaneous velocity relative to the barycenter of the Solar System. Note that the displacement due to aberration is orthogonal to any displacement due to parallax. If parallax is detectable, the maximum displacement to the south would occur in December, and the maximum displacement to the north in June. It is this apparently anomalous motion that so mystified early astronomers. Solar annual aberration A special case of annual aberration is the nearly constant deflection of the Sun from its position in the Sun's rest frame by κ towards the west (as viewed from Earth), opposite to the apparent motion of the Sun along the ecliptic (which is from west to east, as seen from Earth). The deflection thus makes the Sun appear to be behind (or retarded) from its rest-frame position on the ecliptic by a position or angle κ. This deflection may equivalently be described as a light-time effect due to motion of the Earth during the 8.3 minutes that it takes light to travel from the Sun to Earth. The relation with κ is: light-time = [κ / (2π rad)] × (orbital period) = [0.000099365 rad / (2π rad)] × [365.25 d × 24 h/d × 60 min/h] = 8.3167 min ≈ 8 min 19 sec = 499 sec. This is possible since the transit time of sunlight is short relative to the orbital period of the Earth, so the Earth's frame may be approximated as inertial. In the Earth's frame, the Sun moves, at a mean velocity v = 29.789 km/s, by a distance d ≈ 14,864.7 km in the time Δt ≈ 499 sec that it takes light to reach Earth, for the orbit of mean radius R = 1 AU = 149,597,870.7 km. This gives an angular correction of d/R ≈ 0.000099364 rad = 20.49539 sec, which can be solved to give the deflection ≈ 0.000099365 rad = 20.49559 sec, very nearly the same as the aberrational correction κ (here the angle is in radians and not in arcseconds). Diurnal aberration Diurnal aberration is caused by the velocity of the observer on the surface of the rotating Earth. It is therefore dependent not only on the time of the observation, but also the latitude and longitude of the observer. Its effect is much smaller than that of annual aberration, and is only 0.32 arcseconds in the case of an observer at the Equator, where the rotational velocity is greatest. Secular aberration The secular component of aberration, caused by the motion of the Solar System in space, has been further subdivided into several components: aberration resulting from the motion of the solar system barycenter around the center of our Galaxy, aberration resulting from the motion of the Galaxy relative to the Local Group, and aberration resulting from the motion of the Local Group relative to the cosmic microwave background. Secular aberration affects the apparent positions of stars and extragalactic objects. The large, constant part of secular aberration cannot be directly observed and "It has been standard practice to absorb this large, nearly constant effect into the reported" positions of stars. In about 200 million years, the Sun circles the galactic center, whose measured location is near right ascension (α = 266.4°) and declination (δ = −29.0°).
The constant, unobservable, effect of the solar system's motion around the galactic center has been computed variously as 150 or 165 arcseconds. The other, observable, part is an acceleration toward the galactic center of approximately 2.5 × 10−10 m/s2, which yields a change of aberration of about 5 μas/yr. Highly precise measurements extending over several years can observe this change in secular aberration, often called the secular aberration drift or the acceleration of the Solar System, as a small apparent proper motion. Recently, highly precise astrometry of extragalactic objects using both Very Long Baseline Interferometry and the Gaia space observatory have successfully measured this small effect. The first VLBI measurement of the apparent motion, over a period of 20 years, of 555 extragalactic objects towards the center of our galaxy at equatorial coordinates of α = 263° and δ = −20° indicated a secular aberration drift 6.4 ±1.5 μas/yr. Later determinations using a series of VLBI measurements extending over almost 40 years determined the secular aberration drift to be 5.83 ± 0.23 μas/yr in the direction α = 270.2 ± 2.3° and δ = −20.2° ± 3.6°. Optical observations using only 33 months of Gaia satellite data of 1.6 million extragalactic sources indicated an acceleration of the solar system of 2.32 ± 0.16 × 10−10 m/s2 and a corresponding secular aberration drift of 5.05 ± 0.35 μas/yr in the direction of α = 269.1° ± 5.4°, δ = −31.6° ± 4.1°. It is expected that later Gaia data releases, incorporating about 66 and 120 months of data, will reduce the random errors of these results by factors of 0.35 and 0.15. The latest edition of the International Celestial Reference Frame (ICRF3) adopted a recommended galactocentric aberration constant of 5.8 μas/yr and recommended a correction for secular aberration to obtain the highest positional accuracy for times other than the reference epoch 2015.0. Planetary aberration Planetary aberration is the combination of the aberration of light (due to Earth's velocity) and light-time correction (due to the object's motion and distance), as calculated in the rest frame of the Solar System. Both are determined at the instant when the moving object's light reaches the moving observer on Earth. It is so called because it is usually applied to planets and other objects in the Solar System whose motion and distance are accurately known. Discovery and first observations The discovery of the aberration of light was totally unexpected, and it was only by considerable perseverance and perspicacity that Bradley was able to explain it in 1727. It originated from attempts to discover whether stars possessed appreciable parallaxes. Search for stellar parallax The Copernican heliocentric theory of the Solar System had received confirmation by the observations of Galileo and Tycho Brahe and the mathematical investigations of Kepler and Newton. As early as 1573, Thomas Digges had suggested that parallactic shifting of the stars should occur according to the heliocentric model, and consequently if stellar parallax could be observed it would help confirm this theory. Many observers claimed to have determined such parallaxes, but Tycho Brahe and Giovanni Battista Riccioli concluded that they existed only in the minds of the observers, and were due to instrumental and personal errors. 
However, in 1680 Jean Picard, in his Voyage d'Uranibourg, stated, as a result of ten years' observations, that Polaris, the Pole Star, exhibited variations in its position amounting to 40″ annually. Some astronomers endeavoured to explain this by parallax, but these attempts failed because the motion differed from that which parallax would produce. John Flamsteed, from measurements made in 1689 and succeeding years with his mural quadrant, similarly concluded that the declination of Polaris was 40″ less in July than in September. Robert Hooke, in 1674, published his observations of γ Draconis, a star of magnitude 2m which passes practically overhead at the latitude of London (hence its observations are largely free from the complex corrections due to atmospheric refraction), and concluded that this star was 23″ more northerly in July than in October. James Bradley's observations Consequently, when Bradley and Samuel Molyneux entered this sphere of research in 1725, there was still considerable uncertainty as to whether stellar parallaxes had been observed or not, and it was with the intention of definitely answering this question that they erected a large telescope at Molyneux's house at Kew. They decided to reinvestigate the motion of γ Draconis with a telescope constructed by George Graham (1675–1751), a celebrated instrument-maker. This was fixed to a vertical chimney stack in such manner as to permit a small oscillation of the eyepiece, the amount of which (i.e. the deviation from the vertical) was regulated and measured by the introduction of a screw and a plumb line. The instrument was set up in November 1725, and observations on γ Draconis were made starting in December. The star was observed to move 40″ southwards between September and March, and then reversed its course from March to September. At the same time, 35 Camelopardalis, a star with a right ascension nearly exactly opposite to that of γ Draconis, was 19″ more northerly at the beginning of March than in September. The asymmetry of these results, which were expected to be mirror images of each other, was completely unexpected and inexplicable by existing theories. Early hypotheses Bradley and Molyneux discussed several hypotheses in the hope of finding the solution. Since the apparent motion was evidently caused neither by parallax nor observational errors, Bradley first hypothesized that it could be due to oscillations in the orientation of the Earth's axis relative to the celestial sphere – a phenomenon known as nutation. 35 Camelopardalis was seen to possess an apparent motion which could be consistent with nutation, but since its declination varied only one half as much as that of γ Draconis, it was obvious that nutation did not supply the answer (however, Bradley later went on to discover that the Earth does indeed nutate). He also investigated the possibility that the motion was due to an irregular distribution of the Earth's atmosphere, thus involving abnormal variations in the refractive index, but again obtained negative results. On August 19, 1727, Bradley embarked upon a further series of observations using a telescope of his own erected at the Rectory, Wanstead. This instrument had the advantage of a larger field of view and he was able to obtain precise positions of a large number of stars over the course of about twenty years.
During his first two years at Wanstead, he established the existence of the phenomenon of aberration beyond all doubt, and this also enabled him to formulate a set of rules that would allow the calculation of the effect on any given star at a specified date. Development of the theory of aberration Bradley eventually developed his explanation of aberration in about September 1728 and this theory was presented to the Royal Society in mid January the following year. One well-known story was that he saw the change of direction of a wind vane on a boat on the Thames, caused not by an alteration of the wind itself, but by a change of course of the boat relative to the wind direction. However, there is no record of this incident in Bradley's own account of the discovery, and it may therefore be apocryphal. The following table shows the magnitude of deviation from true declination for γ Draconis and the direction, on the planes of the solstitial colure and ecliptic prime meridian, of the tangent of the velocity of the Earth in its orbit for each of the four months where the extremes are found, as well as expected deviation from true ecliptic longitude if Bradley had measured its deviation from right ascension: Bradley proposed that the aberration of light not only affected declination, but right ascension as well, so that a star in the pole of the ecliptic would describe a little ellipse with a diameter of about 40", but for simplicity, he assumed it to be a circle. Since he only observed the deviation in declination, and not in right ascension, his calculations for the maximum deviation of a star in the pole of the ecliptic are for its declination only, which will coincide with the diameter of the little circle described by such star. For eight different stars, his calculations are as follows: Based on these calculations, Bradley was able to estimate the constant of aberration at 20.2", which is equal to 0.00009793 radians, and with this was able to estimate the speed of light at per second. By projecting the little circle for a star in the pole of the ecliptic, he could simplify the calculation of the relationship between the speed of light and the speed of the Earth's annual motion in its orbit as follows: Thus, the speed of light to the speed of the Earth's annual motion in its orbit is 10,210 to one, from whence it would follow, that light moves, or is propagated as far as from the Sun to the Earth in 8 minutes 12 seconds. The original motivation of the search for stellar parallax was to test the Copernican theory that the Earth revolves around the Sun. The change of aberration in the course of the year demonstrates the relative motion of the Earth and the stars. Retrodiction on Descartes' lightspeed argument In the prior century, René Descartes argued that if light were not instantaneous, then shadows of moving objects would lag; and if propagation times over terrestrial distances were appreciable, then during a lunar eclipse the Sun, Earth, and Moon would be out of alignment by hours' motion, contrary to observation. Huygens commented that, on Rømer's lightspeed data (yielding an earth-moon round-trip time of only seconds), the lag angle would be imperceptible. What they both overlooked is that aberration (as understood only later) would exactly counteract the lag even if large, leaving this eclipse method completely insensitive to light speed. (Otherwise, shadow-lag methods could be made to sense absolute translational motion, contrary to a basic principle of relativity.) 
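Bradley's figures quoted above are easy to verify with a few lines of arithmetic. The snippet below is only a sanity check using his own numbers (a 20.2″ constant of aberration and a 365.25-day year), not a reconstruction of his actual calculation:

import math

kappa = 20.2 / 3600 * math.pi / 180          # Bradley's constant of aberration, in radians
ratio = 1.0 / math.tan(kappa)                # speed of light : Earth's orbital speed
year_seconds = 365.25 * 24 * 3600

# The Earth covers 2*pi AU per year, so light crossing 1 AU takes year / (2*pi*ratio).
light_time = year_seconds / (2 * math.pi * ratio)
print(f"c / v = {ratio:,.0f}")                              # about 10,210 to one
print(f"Sun-Earth light time = {light_time / 60:.2f} min")  # about 8 min 12 s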
Historical theories of aberration The phenomenon of aberration became a driving force for many physical theories during the 200 years between its observation and the explanation by Albert Einstein. The first classical explanation was provided in 1729, by James Bradley as described above, who attributed it to the finite speed of light and the motion of Earth in its orbit around the Sun. However, this explanation proved inaccurate once the wave nature of light was better understood, and correcting it became a major goal of the 19th century theories of luminiferous aether. Augustin-Jean Fresnel proposed a correction due to the motion of a medium (the aether) through which light propagated, known as "partial aether drag". He proposed that objects partially drag the aether along with them as they move, and this became the accepted explanation for aberration for some time. George Stokes proposed a similar theory, explaining that aberration occurs due to the flow of aether induced by the motion of the Earth. Accumulated evidence against these explanations, combined with new understanding of the electromagnetic nature of light, led Hendrik Lorentz to develop an electron theory which featured an immobile aether, and he explained that objects contract in length as they move through the aether. Motivated by these previous theories, Albert Einstein then developed the theory of special relativity in 1905, which provides the modern account of aberration. Bradley's classical explanation Bradley conceived of an explanation in terms of a corpuscular theory of light in which light is made of particles. His classical explanation appeals to the motion of the earth relative to a beam of light-particles moving at a finite velocity, and is developed in the Sun's frame of reference, unlike the classical derivation given above. Consider the case where a distant star is motionless relative to the Sun, and the star is extremely far away, so that parallax may be ignored. In the rest frame of the Sun, this means light from the star travels in parallel paths to the Earth observer, and arrives at the same angle regardless of where the Earth is in its orbit. Suppose the star is observed on Earth with a telescope, idealized as a narrow tube. The light enters the tube from the star at angle and travels at speed taking a time to reach the bottom of the tube, where it is detected. Suppose observations are made from Earth, which is moving with a speed . During the transit of the light, the tube moves a distance . Consequently, for the particles of light to reach the bottom of the tube, the tube must be inclined at an angle different from , resulting in an apparent position of the star at angle . As the Earth proceeds in its orbit it changes direction, so changes with the time of year the observation is made. The apparent angle and true angle are related using trigonometry as: . In the case of , this gives . While this is different from the more accurate relativistic result described above, in the limit of small angle and low velocity they are approximately the same, within the error of the measurements of Bradley's day. These results allowed Bradley to make one of the earliest measurements of the speed of light. Luminiferous aether In the early nineteenth century the wave theory of light was being rediscovered, and in 1804 Thomas Young adapted Bradley's explanation for corpuscular light to wavelike light traveling through a medium known as the luminiferous aether. 
His reasoning was the same as Bradley's, but it required that this medium be immobile in the Sun's reference frame and must pass through the earth unaffected, otherwise the medium (and therefore the light) would move along with the earth and no aberration would be observed. However, it soon became clear that Young's theory could not account for aberration when materials with a non-vacuum refractive index were present. An important example is that of a telescope filled with water. The speed of light in such a telescope will be slower than in vacuum, and is given by c/n rather than c, where n is the refractive index of the water. Thus, by Bradley and Young's reasoning the aberration angle would be given by tan θ = nv/c rather than v/c, which predicts a medium-dependent angle of aberration. When refraction at the telescope's objective is taken into account this result deviates even more from the vacuum result. In 1810 François Arago performed a similar experiment and found that the aberration was unaffected by the medium in the telescope, providing solid evidence against Young's theory. This experiment was subsequently verified by many others in the following decades, most accurately by Airy in 1871, with the same result. Aether drag models Fresnel's aether drag In 1818, Augustin Fresnel developed a modified explanation to account for the water telescope and for other aberration phenomena. He explained that the aether is generally at rest in the Sun's frame of reference, but objects partially drag the aether along with them as they move. That is, the aether in an object of refractive index n moving at velocity v is partially dragged with a velocity (1 − 1/n²)v, bringing the light along with it. This factor is known as "Fresnel's dragging coefficient". This dragging effect, along with refraction at the telescope's objective, compensates for the slower speed of light in the water telescope in Bradley's explanation. With this modification Fresnel obtained Bradley's vacuum result even for non-vacuum telescopes, and was also able to predict many other phenomena related to the propagation of light in moving bodies. Fresnel's dragging coefficient became the dominant explanation of aberration for the next decades. Stokes' aether drag However, the fact that light is polarized (discovered by Fresnel himself) led scientists such as Cauchy and Green to believe that the aether was a totally immobile elastic solid as opposed to Fresnel's fluid aether. There was thus renewed need for an explanation of aberration consistent both with Fresnel's predictions (and Arago's observations) and with polarization. In 1845, Stokes proposed a 'putty-like' aether which acts as a liquid on large scales but as a solid on small scales, thus supporting both the transverse vibrations required for polarized light and the aether flow required to explain aberration. Making only the assumptions that the fluid is irrotational and that the boundary conditions of the flow are such that the aether has zero velocity far from the Earth, but moves at the Earth's velocity at its surface and within it, he was able to completely account for aberration. The velocity of the aether outside of the Earth would decrease as a function of distance from the Earth so light rays from stars would be progressively dragged as they approached the surface of the Earth. The Earth's motion would be unaffected by the aether due to D'Alembert's paradox. Both Fresnel's and Stokes' theories were popular.
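To put numbers on the water-telescope argument above, the sketch below compares the naive prediction for a water-filled telescope in an undragged, immobile aether with the vacuum value, and evaluates Fresnel's dragging coefficient. The expressions used (nv/c for the undragged case and 1 − 1/n² for the coefficient) are the standard textbook forms, assumed here rather than quoted from the historical papers:

import math

C = 299_792_458.0        # speed of light, m/s
V = 29_780.0             # Earth's mean orbital speed, m/s
N_WATER = 4.0 / 3.0      # approximate refractive index of water

rad_to_arcsec = math.degrees(1.0) * 3600
vacuum = V / C                          # ordinary aberration constant
undragged_water = N_WATER * V / C       # immobile aether, no drag
drag_coefficient = 1 - 1 / N_WATER**2   # Fresnel's dragging coefficient

print(f"vacuum aberration:        {vacuum * rad_to_arcsec:.2f} arcsec")
print(f"undragged water estimate: {undragged_water * rad_to_arcsec:.2f} arcsec")
print(f"Fresnel drag coefficient: {drag_coefficient:.4f}")
# Airy's water telescope showed the vacuum value; Fresnel's partial drag,
# together with refraction at the objective, accounts for that null result.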
However, the question of aberration was put aside during much of the second half of the 19th century as the focus of inquiry turned to the electromagnetic properties of aether. Lorentz' length contraction In the 1880s once electromagnetism was better understood, interest turned again to the problem of aberration. By this time flaws were known in both Fresnel's and Stokes' theories. Fresnel's theory required that the relative velocity of aether and matter be different for light of different colors, and it was shown that the boundary conditions Stokes had assumed in his theory were inconsistent with his assumption of irrotational flow. At the same time, the modern theories of electromagnetic aether could not account for aberration at all. Many scientists such as Maxwell, Heaviside and Hertz unsuccessfully attempted to solve these problems by incorporating either Fresnel or Stokes' theories into Maxwell's new electromagnetic laws. Hendrik Lorentz spent considerable effort along these lines. After working on this problem for a decade, the issues with Stokes' theory caused him to abandon it and to follow Fresnel's suggestion of a (mostly) stationary aether (1892, 1895). However, in Lorentz's model the aether was completely immobile, like the electromagnetic aethers of Cauchy, Green and Maxwell and unlike Fresnel's aether. He obtained Fresnel's dragging coefficient from modifications of Maxwell's electromagnetic theory, including a modification of the time coordinates in moving frames ("local time"). In order to explain the Michelson–Morley experiment (1887), which apparently contradicted both Fresnel's and Lorentz's immobile aether theories, and apparently confirmed Stokes' complete aether drag, Lorentz theorized (1892) that objects undergo "length contraction" by a factor of √(1 − v²/c²) in the direction of their motion through the aether. In this way, aberration (and all related optical phenomena) can be accounted for in the context of an immobile aether. Lorentz' theory became the basis for much research in the next decade, and beyond. Its predictions for aberration are identical to those of the relativistic theory. Special relativity Lorentz' theory matched experiment well, but it was complicated and made many unsubstantiated physical assumptions about the microscopic nature of electromagnetic media. In his 1905 theory of special relativity, Albert Einstein reinterpreted the results of Lorentz' theory in a much simpler and more natural conceptual framework which disposed of the idea of an aether. His derivation is given above, and is now the accepted explanation. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of aberration. Other important motivations for Einstein's development of relativity were the moving magnet and conductor problem and (indirectly) the negative aether drift experiments, already mentioned by him in the introduction of his first relativity paper, and Einstein returned to the point in a note written in 1952. While Einstein's result is the same as Bradley's original equation except for an extra factor of the Lorentz factor γ, Bradley's result does not merely give the classical limit of the relativistic case, in the sense that it gives incorrect predictions even at low relative velocities. Bradley's explanation cannot account for situations such as the water telescope, nor for many other optical effects (such as interference) that might occur within the telescope.
This is because in the Earth's frame it predicts that the direction of propagation of the light beam in the telescope is not normal to the wavefronts of the beam, in contradiction with Maxwell's theory of electromagnetism. It also does not preserve the speed of light c between frames. However, Bradley did correctly infer that the effect was due to relative velocities. See also Apparent place Stellar parallax Astronomical nutation Proper motion Timeline of electromagnetism and classical optics Relativistic aberration Notes References Further reading P. Kenneth Seidelmann (Ed.), Explanatory Supplement to the Astronomical Almanac (University Science Books, 1992), 127–135, 700. Stephen Peter Rigaud, Miscellaneous Works and Correspondence of the Rev. James Bradley, D.D. F.R.S. (1832). Charles Hutton, Mathematical and Philosophical Dictionary (1795). H. H. Turner, Astronomical Discovery (1904). Thomas Simpson, Essays on Several Curious and Useful Subjects in Speculative and Mix'd Mathematicks (1740). :de:August Ludwig Busch, Reduction of the Observations Made by Bradley at Kew and Wansted to Determine the Quantities of Aberration and Nutation (1838). External links Courtney Seligman on Bradley's observations Electromagnetic radiation Astrometry Radiation
Aberration (astronomy)
[ "Physics", "Chemistry", "Astronomy" ]
7,712
[ "Transport phenomena", "Physical phenomena", "Electromagnetic radiation", "Observational astronomy", "Astrometry", "Waves", "Radiation", "Astronomical sub-disciplines" ]
2,722
https://en.wikipedia.org/wiki/Anders%20Celsius
Anders Celsius (; 27 November 170125 April 1744) was a Swedish astronomer, physicist and mathematician. He was professor of astronomy at Uppsala University from 1730 to 1744, but traveled from 1732 to 1735 visiting notable observatories in Germany, Italy and France. He founded the Uppsala Astronomical Observatory in 1741, and in 1742 proposed (an inverted form of) the Centigrade temperature scale which was later renamed Celsius in his honour. Early life and education Anders Celsius was born in Uppsala, Sweden, on 27 November 1701. His family originated from Ovanåker in the province of Hälsingland. Their family estate was at Doma, also known as Höjen or Högen (locally as Högen 2). The name Celsius is a latinization of the estate's name (Latin 'mound'). As the son of an astronomy professor, Nils Celsius, nephew of botanist Olof Celsius and the grandson of the mathematician Magnus Celsius and the astronomer Anders Spole, Celsius chose a career in science. He was a talented mathematician from an early age. Anders Celsius studied at Uppsala University, where his father was a teacher, and in 1730 he, too, became a professor of astronomy there. Noted Swedish dramatic poet and actor Johan Celsius was also his uncle. Career In 1730, Celsius published the (New Method for Determining the Distance from the Earth to the Sun). His research also involved the study of auroral phenomena, which he conducted with his assistant Olof Hiorter, and he was the first to suggest a connection between the aurora borealis and changes in the magnetic field of the Earth. He observed the variations of a compass needle and found that larger deflections correlated with stronger auroral activity. At Nuremberg in 1733, he published a collection of 316 observations of the aurora borealis made by himself and others over the period 1716–1732. Celsius traveled frequently in the early 1730s, including to Germany, Italy and France, when he visited most of the major European observatories. In Paris he advocated the measurement of an arc of the meridian in Lapland. In 1736, he participated in the expedition organized for that purpose by the French Academy of Sciences, led by the French mathematician Pierre Louis Maupertuis (1698–1759) to measure a degree of latitude. The aim of the expedition was to measure the length of a degree along a meridian, close to the pole, and compare the result with a similar expedition to Peru, today in Ecuador, near the equator. The expeditions confirmed Isaac Newton's belief that the shape of the Earth is an ellipsoid flattened at the poles. In 1738, he published the (Observations on Determining the Shape of the Earth). Celsius's participation in the Lapland expedition won him much respect in Sweden with the government and his peers, and played a key role in generating interest from the Swedish authorities in donating the resources required to construct a new modern observatory in Uppsala. He was successful in the request, and Celsius founded the Uppsala Astronomical Observatory in 1741. The observatory was equipped with instruments purchased during his long voyage abroad, comprising the most modern instrumental technology of the period. He made observations of eclipses and various astronomical objects and published catalogues of carefully determined magnitudes for some 300 stars using his own photometric system (mean error=0.4 mag). In 1742 he proposed the Celsius temperature scale in a paper to the Royal Society of Sciences in Uppsala, the oldest Swedish scientific society, founded in 1710. 
His thermometer was calibrated with a value of 0 for the boiling point of water and 100 for the freezing point. In 1745, a year after Celsius's death, the scale was reversed by Carl Linnaeus to facilitate more practical measurement. Celsius conducted many geographical measurements for the Swedish General map, and was one of earliest to note that much of Scandinavia is slowly rising above sea level, a continuous process which has been occurring since the melting of the ice from the latest ice age. However, he wrongly posed the notion that the water was evaporating. In 1725 he became secretary of the Royal Society of Sciences in Uppsala, and served at this post until his death from tuberculosis in 1744. He supported the formation of the Royal Swedish Academy of Sciences in Stockholm in 1739 by Linnaeus and five others, and was elected a member at the first meeting of this academy. It was in fact Celsius who proposed the new academy's name. Works See also Celsius family Daniel Gabriel Fahrenheit References Citations Sources External links Johan Celsius - Historical records and family trees at MyHeritage 1701 births 1744 deaths 18th-century Swedish astronomers 18th-century deaths from tuberculosis People from Uppsala Uppsala University alumni Academic staff of Uppsala University Members of the Royal Swedish Academy of Sciences Tuberculosis deaths in Sweden 18th-century Swedish mathematicians Fellows of the Royal Society Creators of temperature scales Age of Liberty people
Anders Celsius
[ "Physics" ]
1,019
[ "Scales of temperature", "Physical quantities", "Creators of temperature scales" ]
2,724
https://en.wikipedia.org/wiki/Autocorrelation
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals. Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance. Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation. Autocorrelation of stochastic processes In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let be a random process, and be any point in time ( may be an integer for a discrete-time process or a real number for a continuous-time process). Then is the value (or realization) produced by a given run of the process at time . Suppose that the process has mean and variance at time , for each . Then the definition of the autocorrelation function between times and is where is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined. Subtracting the mean before multiplication yields the auto-covariance function between times and : Note that this expression is not well defined for all-time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law). Definition for wide-sense stationary stochastic process If is a wide-sense stationary process then the mean and the variance are time-independent, and further the autocovariance function depends only on the lag between and : the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and autocorrelation can be expressed as a function of the time-lag, and that this would be an even function of the lag . This gives the more familiar forms for the autocorrelation function and the auto-covariance function: In particular, note that Normalization It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably. The definition of the autocorrelation coefficient of a stochastic process is If the function is well defined, its value must lie in the range , with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. For a wide-sense stationary (WSS) process, the definition is . 
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations. Properties Symmetry property The fact that the autocorrelation function is an even function can be stated as respectively for a WSS process: Maximum at zero For a WSS process: Notice that is always real. Cauchy–Schwarz inequality The Cauchy–Schwarz inequality, inequality for stochastic processes: Autocorrelation of white noise The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at and will be exactly for all other . Wiener–Khinchin theorem The Wiener–Khinchin theorem relates the autocorrelation function to the power spectral density via the Fourier transform: For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only: Autocorrelation of random vectors The (potentially time-dependent) autocorrelation matrix (also called second moment) of a (potentially time-dependent) random vector is an matrix containing as elements the autocorrelations of all pairs of elements of the random vector . The autocorrelation matrix is used in various digital signal processing algorithms. For a random vector containing random elements whose expected value and variance exist, the autocorrelation matrix is defined by where denotes the transposed matrix of dimensions . Written component-wise: If is a complex random vector, the autocorrelation matrix is instead defined by Here denotes Hermitian transpose. For example, if is a random vector, then is a matrix whose -th entry is . Properties of the autocorrelation matrix The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors. The autocorrelation matrix is a positive semidefinite matrix, i.e. for a real random vector, and respectively in case of a complex random vector. All eigenvalues of the autocorrelation matrix are real and non-negative. The auto-covariance matrix is related to the autocorrelation matrix as follows:Respectively for complex random vectors: Autocorrelation of deterministic signals In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function. Autocorrelation of continuous-time signal Given a signal , the continuous autocorrelation is most often defined as the continuous cross-correlation integral of with itself, at lag . where represents the complex conjugate of . Note that the parameter in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning. Autocorrelation of discrete-time signal The discrete autocorrelation at lag for a discrete-time signal is The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. 
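A minimal Python sketch (added here as an illustration; the signal [2, 3, -1] is an arbitrary example, not one taken from the sources) computes the raw, unnormalized discrete autocorrelation of a finite-energy real sequence directly from the defining sum, treating samples outside the sequence as zero.

def discrete_autocorrelation(x, max_lag=None):
    """Raw (unnormalized) autocorrelation R[lag] = sum_n x[n] * x[n + lag].

    Follows the signal-processing definition for a finite-energy real
    sequence: no mean subtraction, no scaling, zeros outside the data.
    """
    n = len(x)
    if max_lag is None:
        max_lag = n - 1
    return [sum(x[i] * x[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

print(discrete_autocorrelation([2, 3, -1]))
# [14, 3, -2]: R[0] is the signal energy, and by symmetry R[-lag] = R[lag].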
For wide-sense-stationary random processes, the autocorrelations are defined as For processes that are not stationary, these will also be functions of , or . For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes. Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.) Definition for periodic signals If is a continuous periodic function of period , the integration from to is replaced by integration over any interval of length : which is equivalent to Properties In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes. A fundamental property of the autocorrelation is symmetry, , which is easy to prove from the definition. In the continuous case, the autocorrelation is an even function when is a real function, and the autocorrelation is a Hermitian function when is a complex function. The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay , . This is a consequence of the rearrangement inequality. The same result holds in the discrete case. The autocorrelation of a periodic function is, itself, periodic with the same period. The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all ) is the sum of the autocorrelations of each function separately. Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation. By using the symbol to represent convolution and is a function which manipulates the function and is defined as , the definition for may be written as: Multi-dimensional autocorrelation Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function. Efficient computation For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence (i.e. , and for all other values of ) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values: Thus the required autocorrelation sequence is , where and the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. 
then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give which has the same period as the signal sequence The procedure can be regarded as an application of the convolution property of Z-transform of a discrete signal. While the brute force algorithm is order , several efficient algorithms exist which can compute the autocorrelation in order . For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data with two fast Fourier transforms (FFT): where IFFT denotes the inverse fast Fourier transform. The asterisk denotes complex conjugate. Alternatively, a multiple correlation can be performed by using brute force calculation for low values, and then progressively binning the data with a logarithmic density to compute higher values, resulting in the same efficiency, but with lower memory requirements. Estimation For a discrete process with known mean and variance for which we observe observations , an estimate of the autocorrelation coefficient may be obtained as for any positive integer . When the true mean and variance are known, this estimate is unbiased. If the true mean and variance of the process are not known there are several possibilities: If and are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate. A periodogram-based estimate replaces in the above formula with . This estimate is always biased; however, it usually has a smaller mean squared error. Other possibilities derive from treating the two portions of data and separately and calculating separate sample means and/or sample variances for use in defining the estimate. The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of , then form a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the 's, the variance calculated may turn out to be negative. Regression analysis In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used. In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive. 
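The FFT route just described, together with the normalized sample estimate, can be sketched in Python with NumPy; this is an illustrative implementation under a stated zero-padding assumption, not code from the cited references. Zero-padding to twice the signal length turns the circular autocorrelation returned by the inverse FFT into the ordinary (linear) one for the lags of interest.

import numpy as np

def autocorrelation_fft(x):
    """Linear autocorrelation of a real sequence via the Wiener-Khinchin route.

    FFT -> power spectrum -> inverse FFT, zero-padded to avoid circular
    wrap-around; runs in O(n log n) rather than the O(n^2) brute force.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    spectrum = np.fft.rfft(x, 2 * n)                   # zero-padded FFT
    acf = np.fft.irfft(spectrum * np.conj(spectrum), 2 * n)
    return acf[:n]                                     # lags 0 .. n-1

def acf_estimate(x, max_lag):
    """Periodogram-style (biased) sample autocorrelation coefficients."""
    x = np.asarray(x, dtype=float) - np.mean(x)        # subtract the sample mean
    raw = autocorrelation_fft(x)
    return raw[:max_lag + 1] / raw[0]                  # scale by the lag-0 value

print(np.round(autocorrelation_fft([2, 3, -1]), 6))    # [14.  3. -2.], matching the brute force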
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin-Watson can be linearly mapped however to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where 'k' is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR2, where T is the sample size and R2 is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as with k degrees of freedom. Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent). In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have , for , and , for . Applications Autocorrelation's ability to find repeating patterns in data yields many applications, including: Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions. Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators. Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated. Utilized in the GPS system to correct for the propagation delay, or time shift, between the point of time at the transmission of the carrier signal at the satellites, and the point of time at the receiver on the ground. This is done by the receiver generating a replica signal of the 1,023-bit C/A (Coarse/Acquisition) code, and generating lines of code chips [-1,1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along in order to accommodate for the doppler shift in the incoming satellite signal, until the receiver replica signal and the satellite signal codes match up. The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density. In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics. In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field. In astronomy, autocorrelation can determine the frequency of pulsars. 
In music, autocorrelation (when applied at time scales smaller than a second) is used as a pitch detection algorithm for both instrument tuners and "Auto Tune" (used as a distortion effect or to fix intonation). When applied at time scales larger than a second, autocorrelation can identify the musical beat, for example to determine tempo. Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone. In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population. The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide. In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low mass X-ray binaries. In panel data, spatial autocorrelation refers to correlation of a variable with itself through space. In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination. In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground. In medical ultrasound imaging, autocorrelation is used to visualize blood flow. In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset. In numerical relays, autocorrelation has been used to accurately measure power system frequency. Serial dependence Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields however, the two terms are used as synonyms. A time series of a random variable has serial dependence if the value at some time in the series is statistically dependent on the value at another time . A series is serially independent if there is no dependence between any pair. If a time series is stationary, then statistical dependence between the pair would imply that there is statistical dependence between all pairs of values at the same lag . See also Autocorrelation matrix Autocorrelation of a formal word Autocorrelation technique Autocorrelator Cochrane–Orcutt estimation (transformation for autocorrelated error terms) Correlation function Correlogram Cross-correlation CUSUM Fluorescence correlation spectroscopy Optical autocorrelation Partial autocorrelation function Phylogenetic autocorrelation (Galton's problem) Pitch detection algorithm Prais–Winsten transformation Scaled correlation Triple correlation Unbiased estimation of standard deviation References Further reading Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005. Klapetek, Petr (2018). Quantitative Data Processing in Scanning Probe Microscopy: SPM Applications for Nanometrology (Second ed.). Elsevier. pp. 108–112 . Signal processing Time domain analysis
Autocorrelation
[ "Technology", "Engineering" ]
4,295
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
2,726
https://en.wikipedia.org/wiki/Atlas%20Autocode
Atlas Autocode (AA) is a programming language developed around 1963 at the University of Manchester. A variant of the language ALGOL, it was developed by Tony Brooker and Derrick Morris for the Atlas computer. The initial AA and AB compilers were written by Jeff Rohl and Tony Brooker using the Brooker-Morris Compiler-compiler, with a later hand-coded non-CC implementation (ABC) by Jeff Rohl. The word Autocode was basically an early term for programming language. Different autocodes could vary greatly. Features AA was a block structured language that featured explicitly typed variables, subroutines, and functions. It omitted some ALGOL features such as passing parameters by name, which in ALGOL 60 means passing the memory address of a short subroutine (a thunk) to recalculate a parameter each time it is mentioned. The AA compiler could generate range-checking for array accesses, and allowed an array to have dimensions that were determined at runtime, i.e., an array could be declared as integer array Thing (i:j), where i and j were calculated values. AA high-level routines could include machine code, either to make an inner loop more efficient or to effect some operation which otherwise cannot be done easily. AA included a complex data type to represent complex numbers, partly because of pressure from the electrical engineering department, as complex numbers are used to represent the behavior of alternating current. The imaginary unit square root of -1 was represented by i, which was treated as a fixed complex constant = i. The complex data type was dropped when Atlas Autocode later evolved into the language Edinburgh IMP. IMP was an extension of AA and was used to write the Edinburgh Multiple Access System (EMAS) operating system. In addition to being notable as the progenitor of IMP and EMAS, AA is noted for having had many of the features of the original Compiler Compiler. A variant of the AA compiler included run-time support for a top-down recursive descent parser. The style of parser used in the Compiler Compiler was in use continuously at Edinburgh from the 60's until almost the year 2000. Other Autocodes were developed for the Titan computer, a prototype Atlas 2 at Cambridge, and the Ferranti Mercury. Syntax Atlas Autocode's syntax was largely similar to ALGOL, though it was influenced by the output device which the author had available, a Friden Flexowriter. Thus, it allowed symbols like ½ for .5 and the superscript 2 for to the power of 2. The Flexowriter supported overstriking and thus, AA did also: up to three characters could be overstruck as a single symbol. For example, the character set had no ↑ symbol, so exponentiation was an overstrike of | and *. The aforementioned underlining of reserved words (keywords) could also be done using overstriking. The language is described in detail in the Atlas Autocode Reference Manual. Other Flexowriter characters that were found a use in AA were: α in floating-point numbers, e.g., 3.56α-7 for modern 3.56e-7 ; β to mean the second half of a 48-bit Atlas memory word; π for the mathematical constant pi. When AA was ported to the English Electric KDF9 computer, the character set was changed to International Organization for Standardization (ISO). That compiler has been recovered from an old paper tape by the Edinburgh Computer History Project and is available online, as is a high-quality scan of the original Edinburgh version of the Atlas Autocode manual. 
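Atlas Autocode compilers are no longer in everyday use, so the following Python sketch (an added illustration, not Atlas Autocode, and not taken from the manuals cited in this article) mimics the semantics of the runtime-dimensioned, range-checked arrays described above, such as a declaration integer array Thing (i:j): the bounds are ordinary computed values fixed when the array is created, and every access is range-checked, which is the effect the AA compiler could generate.

class AAIntegerArray:
    """Emulates an Atlas-Autocode-style integer array with runtime bounds lo:hi."""

    def __init__(self, lo, hi):
        if hi < lo:
            raise ValueError("upper bound is below lower bound")
        self.lo, self.hi = lo, hi
        self._data = [0] * (hi - lo + 1)

    def _offset(self, index):
        # Range check on every access, the effect AA's generated checks had.
        if not (self.lo <= index <= self.hi):
            raise IndexError(f"index {index} outside bounds {self.lo}:{self.hi}")
        return index - self.lo

    def __getitem__(self, index):
        return self._data[self._offset(index)]

    def __setitem__(self, index, value):
        self._data[self._offset(index)] = int(value)

i, j = 3, 7                      # bounds computed at run time
thing = AAIntegerArray(i, j)     # roughly: integer array Thing (i:j)
thing[5] = 42
print(thing[5])                  # 42
# thing[8] would raise IndexError, analogous to AA's range checking.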
Keywords in AA were distinguishable from other text by being underlined, which was implemented via overstrike in the Flexowriter (compare to bold in ALGOL). There were also two stropping regimes. First, there was an "uppercase delimiters" mode in which all uppercase letters (outside strings) were treated as underlined lowercase. Second, in some versions (but not in the original Atlas version), it was possible to strop keywords by placing a "%" sign in front of them, for example the keyword endofprogramme could be typed as %end %of %programme or %endofprogramme. This significantly reduced typing, since only a single extra character was needed rather than an overstrike of the whole keyword. As in ALGOL, there were no reserved words in the language, as keywords were identified by underlining (or stropping), not by recognising reserved character sequences. In the statement if token=if then result = token, there is both a keyword if and a variable named if. As in ALGOL, AA allowed spaces in variable names, such as integer previous value. Spaces were not significant and were removed before parsing in a trivial pre-lexing stage called "line reconstruction". What the compiler would see in the above example would be "iftoken=ifthenresult=token". Spaces were possible due partly to keywords being distinguished in other ways, and partly because the source was processed by scannerless parsing, without a separate lexing phase, which allowed the lexical syntax to be context-sensitive. The syntax for expressions allowed the multiplication operator to be omitted, e.g., 3a was treated as 3*a, and a(i+j) was treated as a*(i+j) if a was not an array. In ambiguous uses, the longest possible name was taken (maximal munch); for example, ab was not treated as a*b, whether or not a and b had been declared. References External links The main features of Atlas Autocode, by R. A. Brooker, J. S. Rohl, and S. R. Clark The Atlas Autocode Mini-Manual by W. F. Lunnon, G. Riding (July 1965) Atlas Autocode Reference Manual by R. A. Brooker, J. S. Rohl (March 1965) Mercury Autocode, Atlas Autocode and some Associated Matters by Vic Forrington (Jan 2014) Flowcharts for Atlas Autocode compiler on KDF9. Ferranti History of computing in the United Kingdom Structured programming languages
Atlas Autocode
[ "Technology" ]
1,292
[ "History of computing", "History of computing in the United Kingdom" ]
2,752
https://en.wikipedia.org/wiki/Aspartame
Aspartame is an artificial non-saccharide sweetener 200 times sweeter than sucrose and is commonly used as a sugar substitute in foods and beverages. It is a methyl ester of the aspartic acid/phenylalanine dipeptide with brand names NutraSweet, Equal, and Canderel. Discovered in 1965, aspartame was approved by the US Food and Drug Administration (FDA) in 1974 and re-approved in 1981 after its initial approval was briefly revoked. Aspartame is one of the most studied food additives in the human food supply. Reviews by over 100 governmental regulatory bodies found the ingredient safe for consumption at the normal acceptable daily intake limit. Uses Aspartame is about 180 to 200 times sweeter than sucrose (table sugar). Due to this property, even though aspartame produces roughly the same energy per gram when metabolized as sucrose does, , the quantity of aspartame needed to produce the same sweetness is so small that its caloric contribution is negligible. The sweetness of aspartame lasts longer than that of sucrose, so it is often blended with other artificial sweeteners such as acesulfame potassium to produce an overall taste more like that of sugar. Like many other peptides, aspartame may hydrolyze (break down) into its constituent amino acids under conditions of elevated temperature or high pH. This makes aspartame undesirable as a baking sweetener and prone to degradation in products hosting a high pH, as required for a long shelf life. The stability of aspartame under heating can be improved to some extent by encasing it in fats or in maltodextrin. The stability when dissolved in water depends markedly on pH. At room temperature, it is most stable at pH 4.3, where its half-life is nearly 300 days. At pH 7, however, its half-life is only a few days. Most soft-drinks have a pH between 3 and 5, where aspartame is reasonably stable. In products that may require a longer shelf life, such as syrups for fountain beverages, aspartame is sometimes blended with a more stable sweetener, such as saccharin. Descriptive analyses of solutions containing aspartame report a sweet aftertaste as well as bitter and off-flavor aftertastes. Acceptable levels of consumption The acceptable daily intake (ADI) value for food additives, including aspartame, is defined as the "amount of a food additive, expressed on a body weight basis, that can be ingested daily over a lifetime without appreciable health risk". The Joint FAO/WHO Expert Committee on Food Additives (JECFA) and the European Commission's Scientific Committee on Food (later becoming EFSA) have determined this value is 40 mg/kg of body weight per day for aspartame, while the FDA has set its ADI for aspartame at 50 mg/kg per day an amount equated to consuming 75 packets of commercial aspartame sweetener per day to be within a safe upper limit. The primary source for exposure to aspartame in the US is diet soft drinks, though it can be consumed in other products, such as pharmaceutical preparations, fruit drinks, and chewing gum among others in smaller quantities. A can of diet soda contains of aspartame, and, for a adult, it takes approximately 21 cans of diet soda daily to consume the of aspartame that would surpass the FDA's 50 mg/kg of body weight ADI of aspartame from diet soda alone. Reviews have analyzed studies which have looked at the consumption of aspartame in countries worldwide, including the US, countries in Europe, and Australia, among others. 
These reviews have found that even the high levels of intake of aspartame, studied across multiple countries and different methods of measuring aspartame consumption, are well below the ADI for safe consumption of aspartame. Reviews have also found that populations that are believed to be especially high consumers of aspartame, such as children and diabetics, are below the ADI for safe consumption, even considering extreme worst-case scenario calculations of consumption. In a report released on 10 December 2013, the EFSA said that, after an extensive examination of evidence, it ruled out the "potential risk of aspartame causing damage to genes and inducing cancer" and deemed the amount found in diet sodas safe to consume. Safety and health effects The safety of aspartame has been studied since its discovery, and it is a rigorously tested food ingredient. Aspartame has been deemed safe for human consumption by over 100 regulatory agencies in their respective countries, including the US Food and Drug Administration (FDA), UK Food Standards Agency, the European Food Safety Authority (EFSA), Health Canada, and Food Standards Australia New Zealand. Metabolism and body weight reviews of clinical trials showed that using aspartame (or other non-nutritive sweeteners) in place of sugar reduces calorie intake and body weight in adults and children. A 2017 review of metabolic effects by consuming aspartame found that it did not affect blood glucose, insulin, total cholesterol, triglycerides, calorie intake, or body weight. While high-density lipoprotein levels were higher compared to control, they were lower compared to sucrose. In 2023, the World Health Organization recommended against the use of common non-sugar sweeteners (NSS), including aspartame, to control body weight or lower the risk of non-communicable diseases, stating: "The recommendation is based on the findings of a systematic review of the available evidence which suggests that use of NSS does not confer any long-term benefit in reducing body fat in adults or children. Results of the review also suggest that there may be potential undesirable effects from long-term use of NSS, such as an increased risk of type 2 diabetes, cardiovascular diseases, and mortality in adults." Phenylalanine High levels of the naturally occurring essential amino acid phenylalanine are a health hazard to those born with phenylketonuria (PKU), a rare inherited disease that prevents phenylalanine from being properly metabolized. Because aspartame contains phenylalanine, foods containing aspartame sold in the US must state: "Phenylketonurics: Contains Phenylalanine" on product labels. In the UK, foods that contain aspartame are required by the Food Standards Agency to list the substance as an ingredient, with the warning "Contains a source of phenylalanine". Manufacturers are also required to print "with sweetener(s)" on the label close to the main product name on foods that contain "sweeteners such as aspartame" or "with sugar and sweetener(s)" on "foods that contain both sugar and sweetener". In Canada, foods that contain aspartame are required to list aspartame among the ingredients, include the amount of aspartame per serving, and state that the product contains phenylalanine. Phenylalanine is one of the essential amino acids and is required for normal growth and maintenance of life. 
Concerns about the safety of phenylalanine from aspartame for those without phenylketonuria center largely on hypothetical changes in neurotransmitter levels as well as ratios of neurotransmitters to each other in the blood and brain that could lead to neurological symptoms. Reviews of the literature have found no consistent findings to support such concerns, and, while high doses of aspartame consumption may have some biochemical effects, these effects are not seen in toxicity studies to suggest aspartame can adversely affect neuronal function. As with methanol and aspartic acid, common foods in the typical diet, such as milk, meat, and fruits, will lead to ingestion of significantly higher amounts of phenylalanine than would be expected from aspartame consumption. Cancer , regulatory agencies, including the FDA and EFSA, and the US National Cancer Institute, have concluded that consuming aspartame is safe in amounts within acceptable daily intake levels and does not cause cancer. These conclusions are based on various sources of evidence, such as reviews and epidemiological studies finding no association between aspartame and cancer. In July 2023, scientists for the International Agency for Research on Cancer (IARC) concluded that there was "limited evidence" for aspartame causing cancer in humans, classifying the sweetener as Group 2B (possibly carcinogenic). The lead investigator of the IARC report stated that the classification "shouldn't really be taken as a direct statement that indicates that there is a known cancer hazard from consuming aspartame. This is really more of a call to the research community to try to better clarify and understand the carcinogenic hazard that may or may not be posed by aspartame consumption." The Joint FAO/WHO Expert Committee on Food Additives (JECFA) added that the limited cancer assessment indicated no reason to change the recommended acceptable daily intake level of 40 mg per kg of body weight per day, reaffirming the safety of consuming aspartame within this limit. The FDA responded to the report by stating: Neurological safety Reviews found no evidence that low doses of aspartame had neurotoxic effects. A 2019 policy statement by the American Academy of Pediatrics concluded that there were no safety concerns about aspartame in fetal or childhood development or as a factor in attention deficit hyperactivity disorder. Headaches Reviews have found little evidence to indicate that aspartame induces headaches, although certain subsets of consumers may be sensitive to it. Water quality Aspartame passes through wastewater treatment plants mainly unchanged. Mechanism of action The perceived sweetness of aspartame (and other sweet substances like acesulfame potassium) in humans is due to its binding of the heterodimer G protein-coupled receptor formed by the proteins TAS1R2 and TAS1R3. Rodents do not experience aspartame as sweet-tasting, due to differences in their taste receptors. Metabolites Aspartame is rapidly hydrolyzed in the small intestine by digestive enzymes which break aspartame down into methanol, phenylalanine, aspartic acid, and further metabolites, such as formaldehyde and formic acid. Due to its rapid and complete metabolism, aspartame is not found in circulating blood, even following ingestion of high doses over 200 mg/kg. Aspartic acid Aspartic acid (aspartate) is one of the most common amino acids in the typical diet. 
As with methanol and phenylalanine, intake of aspartic acid from aspartame is less than would be expected from other dietary sources. At the 90th percentile of intake, aspartame provides only between 1% and 2% of the daily intake of aspartic acid. Methanol The methanol produced by aspartame metabolism is unlikely to be a safety concern for several reasons. The amount of methanol produced from aspartame-sweetened foods and beverages is likely to be less than that from food sources already in diets. With regard to formaldehyde, it is rapidly converted in the body, and the amounts of formaldehyde from the metabolism of aspartame are trivial when compared to the amounts produced routinely by the human body and from other foods and drugs. At the highest expected human doses of consumption of aspartame, there are no increased blood levels of methanol or formic acid, and ingesting aspartame at the 90th percentile of intake would produce 25 times less methanol than what would be considered toxic. Chemistry Aspartame is a methyl ester of the dipeptide of the natural amino acids L-aspartic acid and L-phenylalanine. Under strongly acidic or alkaline conditions, aspartame may generate methanol by hydrolysis. Under more severe conditions, the peptide bonds are also hydrolyzed, resulting in free amino acids. Two approaches to synthesis are used commercially. In the chemical synthesis, the two carboxyl groups of aspartic acid are joined into an anhydride, and the amino group is protected with a formyl group as the formamide, by treatment of aspartic acid with a mixture of formic acid and acetic anhydride. Phenylalanine is converted to its methyl ester and combined with the N-formyl aspartic anhydride; then the protecting group is removed from aspartic nitrogen by acid hydrolysis. The drawback of this technique is that a byproduct, the bitter-tasting β-form, is produced when the wrong carboxyl group from aspartic acid anhydride links to phenylalanine, with desired and undesired isomer forming in a 4:1 ratio. A process using an enzyme from Bacillus thermoproteolyticus to catalyze the condensation of the chemically altered amino acids will produce high yields without the β-form byproduct. A variant of this method, which has not been used commercially, uses unmodified aspartic acid but produces low yields. Methods for directly producing aspartyl-phenylalanine by enzymatic means, followed by chemical methylation, have also been tried but not scaled for industrial production. History Aspartame was discovered by accident in 1965 by James M. Schlatter, a chemist working for G.D. Searle & Company in Skokie, Illinois. Schlatter had synthesized aspartame as an intermediate step in generating a tetrapeptide of the hormone gastrin, for use in assessing an anti-ulcer drug candidate. He discovered its sweet taste when he licked his finger, which had become contaminated with aspartame, to lift up a piece of paper. Torunn Atteraas Garin participated in the development of aspartame as an artificial sweetener. In 1975, prompted by issues regarding Flagyl and Aldactone, an FDA task force team reviewed 25 studies submitted by the manufacturer, including 11 on aspartame. The team reported "serious deficiencies in Searle's operations and practices". The FDA sought to authenticate 15 of the submitted studies against the supporting data. 
In 1979, the Center for Food Safety and Applied Nutrition (CFSAN) concluded, since many problems with the aspartame studies were minor and did not affect the conclusions, the studies could be used to assess aspartame's safety. In 1980, the FDA convened a Public Board of Inquiry (PBOI) consisting of independent advisors charged with examining the purported relationship between aspartame and brain cancer. The PBOI concluded aspartame does not cause brain damage, but it recommended against approving aspartame at that time, citing unanswered questions about cancer in laboratory rats. In 1983, the FDA approved aspartame for use in carbonated beverages and for use in other beverages, baked goods, and confections in 1993. In 1996, the FDA removed all restrictions from aspartame, allowing it to be used in all foods. As of May 2023, the FDA stated that it regards aspartame as a safe food ingredient when consumed within the acceptable daily intake level of 50 mg per kg of body weight per day. Several European Union countries approved aspartame in the 1980s, with EU-wide approval in 1994. The Scientific Committee on Food (SCF) reviewed subsequent safety studies and reaffirmed the approval in 2002. The European Food Safety Authority (EFSA) reported in 2006 that the previously established Acceptable daily intake (ADI) was appropriate, after reviewing yet another set of studies. Compendial status British Pharmacopoeia United States Pharmacopeia Commercial uses Under the brand names Equal, NutraSweet, and Canderel, aspartame is an ingredient in approximately 6,000 consumer foods and beverages sold worldwide, including (but not limited to) diet sodas and other soft drinks, instant breakfasts, breath mints, cereals, sugar-free chewing gum, cocoa mixes, frozen desserts, gelatin desserts, juices, laxatives, chewable vitamin supplements, milk drinks, pharmaceutical drugs and supplements, shake mixes, tabletop sweeteners, teas, instant coffees, topping mixes, wine coolers, and yogurt. It is provided as a table condiment in some countries. Aspartame is less suitable for baking than other sweeteners because it breaks down when heated and loses much of its sweetness. NutraSweet Company In 1985, Monsanto bought G.D.Searle, and the aspartame business became a separate Monsanto subsidiary, NutraSweet. In March 2000, Monsanto sold it to J.W. Childs Associates Equity Partners II L.P. European use patents on aspartame expired beginning in 1987, with the US patent following suit in 1992. Ajinomoto In 2004, the market for aspartame, in which Ajinomoto, the world's largest aspartame manufacturer, had a 40% share, was a year, and consumption of the product was rising by 2% a year. Ajinomoto acquired its aspartame business in 2000 from Monsanto for $67 million (equivalent to $ in ). In 2007, Asda was the first British supermarket chain to remove all artificial flavourings and colours in its store brand foods. In 2008, Ajinomoto sued Asda, part of Walmart, for a malicious falsehood action concerning its aspartame product when the substance was listed as excluded from the chain's product line, along with other "nasties". In July 2009, a British court ruled in favor of Asda. In June 2010, an appeals court reversed the decision, allowing Ajinomoto to pursue a case against Asda to protect aspartame's reputation. Asda said that it would continue to use the term "no nasties" on its own-label products, but the suit was settled in 2011 with Asda choosing to remove references to aspartame from its packaging. 
In November 2009, Ajinomoto announced a new brand name for its aspartame sweetener — AminoSweet. Holland Sweetener Company A joint venture of DSM and Tosoh, the Holland Sweetener Company manufactured aspartame using the enzymatic process developed by Toyo Soda (Tosoh) and sold as the brand Sanecta. Additionally, they developed a combination aspartame-acesulfame salt under the brand name Twinsweet. They left the sweetener industry in 2006, because "global aspartame markets are facing structural oversupply, which has caused worldwide strong price erosion over the last five years", making the business "persistently unprofitable". Competing products Because sucralose, unlike aspartame, retains its sweetness after being heated, and has at least twice the shelf life of aspartame, it has become more popular as an ingredient. This, along with differences in marketing and changing consumer preferences, caused aspartame to lose market share to sucralose. In 2004, aspartame traded at about and sucralose, which is roughly three times sweeter by weight, at around . See also Sugar substitute References External links Amino acid derivatives Aromatic compounds Beta-Amino acids Butyramides Dipeptides Carboxylate esters Sugar substitutes Methyl esters E-number additives IARC Group 2B carcinogens
Aspartame
[ "Chemistry" ]
4,207
[ "Organic compounds", "Aromatic compounds" ]
2,756
https://en.wikipedia.org/wiki/Asexual%20reproduction
Asexual reproduction is a type of reproduction that does not involve the fusion of gametes or change in the number of chromosomes. The offspring that arise by asexual reproduction from either unicellular or multicellular organisms inherit the full set of genes of their single parent and thus the newly created individual is genetically and physically similar to the parent or an exact clone of the parent. Asexual reproduction is the primary form of reproduction for single-celled organisms such as archaea and bacteria. Many eukaryotic organisms including plants, animals, and fungi can also reproduce asexually. In vertebrates, the most common form of asexual reproduction is parthenogenesis, which is typically used as an alternative to sexual reproduction in times when reproductive opportunities are limited. Some monitor lizards, including Komodo dragons, can reproduce asexually. While all prokaryotes reproduce without the formation and fusion of gametes, mechanisms for lateral gene transfer such as conjugation, transformation and transduction can be likened to sexual reproduction in the sense of genetic recombination in meiosis. Types of asexual reproduction Fission Prokaryotes (Archaea and Bacteria) reproduce asexually through binary fission, in which the parent organism divides in two to produce two genetically identical daughter organisms. Eukaryotes (such as protists and unicellular fungi) may reproduce in a functionally similar manner by mitosis; most of these are also capable of sexual reproduction. Multiple fission at the cellular level occurs in many protists, e.g. sporozoans and algae. The nucleus of the parent cell divides several times by mitosis, producing several nuclei. The cytoplasm then separates, creating multiple daughter cells. In apicomplexans, multiple fission, or schizogony appears either as merogony, sporogony or gametogony. Merogony results in merozoites, which are multiple daughter cells, that originate within the same cell membrane, sporogony results in sporozoites, and gametogony results in microgametes. Budding Some cells divide by budding (for example baker's yeast), resulting in a "mother" and a "daughter" cell that is initially smaller than the parent. Budding is also known on a multicellular level; an animal example is the hydra, which reproduces by budding. The buds grow into fully matured individuals which eventually break away from the parent organism. Internal budding is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two (endodyogeny) or more (endopolygeny) daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation. Also, budding (external or internal) occurs in some worms like Taenia or Echinococcus; these worms produce cysts and then produce (invaginated or evaginated) protoscolex with budding. Vegetative propagation Vegetative propagation is a type of asexual reproduction found in plants where new individuals are formed without the production of seeds or spores and thus without syngamy or meiosis. Examples of vegetative reproduction include the formation of miniaturized plants called plantlets on specialized leaves, for example in kalanchoe (Bryophyllum daigremontianum) and many produce new plants from rhizomes or stolon (for example in strawberry). Some plants reproduce by forming bulbs or tubers, for example tulip bulbs and Dahlia tubers. 
In these examples, all the individuals are clones, and the clonal population may cover a large area. Spore formation Many multicellular organisms produce spores during their biological life cycle in a process called sporogenesis. Exceptions are animals and some protists, which undergo meiosis immediately followed by fertilization. Plants and many algae on the other hand undergo sporic meiosis where meiosis leads to the formation of haploid spores rather than gametes. These spores grow into multicellular individuals called gametophytes, without a fertilization event. These haploid individuals produce gametes through mitosis. Meiosis and gamete formation therefore occur in separate multicellular generations or "phases" of the life cycle, referred to as alternation of generations. Since sexual reproduction is often more narrowly defined as the fusion of gametes (fertilization), spore formation in plant sporophytes and algae might be considered a form of asexual reproduction (agamogenesis) despite being the result of meiosis and undergoing a reduction in ploidy. However, both events (spore formation and fertilization) are necessary to complete sexual reproduction in the plant life cycle. Fungi and some algae can also utilize true asexual spore formation, which involves mitosis giving rise to reproductive cells called mitospores that develop into a new organism after dispersal. This method of reproduction is found for example in conidial fungi and the red algae Polysiphonia, and involves sporogenesis without meiosis. Thus the chromosome number of the spore cell is the same as that of the parent producing the spores. However, mitotic sporogenesis is an exception and most spores, such as those of plants and many algae, are produced by meiosis. Fragmentation Fragmentation is a form of asexual reproduction where a new organism grows from a fragment of the parent. Each fragment develops into a mature, fully grown individual. Fragmentation is seen in many organisms. Animals that reproduce asexually include planarians, many annelid worms including polychaetes and some oligochaetes, turbellarians and sea stars. Many fungi and plants reproduce asexually. Some plants have specialized structures for reproduction via fragmentation, such as gemmae in mosses and liverworts. Most lichens, which are a symbiotic union of a fungus and photosynthetic algae or cyanobacteria, reproduce through fragmentation to ensure that new individuals contain both symbionts. These fragments can take the form of soredia, dust-like particles consisting of fungal hyphae wrapped around photobiont cells. Clonal Fragmentation in multicellular or colonial organisms is a form of asexual reproduction or cloning where an organism is split into fragments. Each of these fragments develop into mature, fully grown individuals that are clones of the original organism. In echinoderms, this method of reproduction is usually known as fissiparity. Due to many environmental and epigenetic differences, clones originating from the same ancestor might actually be genetically and epigenetically different. Agamogenesis Agamogenesis is any form of reproduction that does not involve a male gamete. Examples are parthenogenesis and apomixis. Parthenogenesis Parthenogenesis is a form of agamogenesis in which an unfertilized egg develops into a new individual. It has been documented in over 2,000 species. Parthenogenesis occurs in the wild in many invertebrates (e.g. 
water fleas, rotifers, aphids, stick insects, some ants, bees and parasitic wasps) and vertebrates (mostly reptiles, amphibians, and fish). It has also been documented in domestic birds and in genetically altered lab mice. Plants can engage in parthenogenesis as well through a process called apomixis. However this process is considered by many to not be an independent reproduction method, but instead a breakdown of the mechanisms behind sexual reproduction. Parthenogenetic organisms can be split into two main categories: facultative and obligate. Facultative parthenogenesis In facultative parthenogenesis, females can reproduce both sexually and asexually. Because of the many advantages of sexual reproduction, most facultative parthenotes only reproduce asexually when forced to. This typically occurs in instances when finding a mate becomes difficult. For example, female zebra sharks will reproduce asexually if they are unable to find a mate in their ocean habitats. Parthenogenesis was previously believed to rarely occur in vertebrates, and only be possible in very small animals. However, it has been discovered in many more species in recent years. Today, the largest species that has been documented reproducing parthenogenically is the Komodo dragon at 10 feet long and over 300 pounds. Heterogony is a form of facultative parthenogenesis where females alternate between sexual and asexual reproduction at regular intervals (see Alternation between sexual and asexual reproduction). Aphids are one group of organism that engages in this type of reproduction. They use asexual reproduction to reproduce quickly and create winged offspring that can colonize new plants and reproduce sexually in the fall to lay eggs for the next season. However, some aphid species are obligate parthenotes. Obligate parthenogenesis In obligate parthenogenesis, females only reproduce asexually. One example of this is the desert grassland whiptail lizard, a hybrid of two other species. Typically hybrids are infertile but through parthenogenesis this species has been able to develop stable populations. Gynogenesis is a form of obligate parthenogenesis where a sperm cell is used to initiate reproduction. However, the sperm's genes never get incorporated into the egg cell. The best known example of this is the Amazon molly. Because they are obligate parthenotes, there are no males in their species so they depend on males from a closely related species (the Sailfin molly) for sperm. Apomixis and nucellar embryony Apomixis in plants is the formation of a new sporophyte without fertilization. It is important in ferns and in flowering plants, but is very rare in other seed plants. In flowering plants, the term "apomixis" is now most often used for agamospermy, the formation of seeds without fertilization, but was once used to include vegetative reproduction. An example of an apomictic plant would be the triploid European dandelion. Apomixis mainly occurs in two forms: In gametophytic apomixis, the embryo arises from an unfertilized egg within a diploid embryo sac that was formed without completing meiosis. In nucellar embryony, the embryo is formed from the diploid nucellus tissue surrounding the embryo sac. Nucellar embryony occurs in some citrus seeds. Male apomixis can occur in rare cases, such as in the Saharan Cypress Cupressus dupreziana, where the genetic material of the embryo is derived entirely from pollen. Androgenesis Androgenesis occurs when a zygote is produced with only paternal nuclear genes. 
During standard sexual reproduction, one female and one male parent each produce haploid gametes (such as a sperm or egg cell, each containing only a single set of chromosomes), which recombine to create offspring with genetic material from both parents. However, in androgenesis, there is no recombination of maternal and paternal chromosomes, and only the paternal chromosomes are passed down to the offspring (the inverse of this is gynogenesis, where only the maternal chromosomes are inherited, which is more common than androgenesis). The offspring produced in androgenesis will still have maternally inherited mitochondria, as is the case with most sexually reproducing species. Androgenesis occurs in nature in many invertebrates (for example, clams, stick insects, some ants, bees, flies and parasitic wasps) and vertebrates (mainly amphibians and fish). Androgenesis has also been seen in genetically modified laboratory mice. One of two things can occur to produce offspring with exclusively paternal genetic material: the maternal nuclear genome can be eliminated from the zygote, or the female can produce an egg with no nucleus, resulting in an embryo developing with only the genome of the male gamete. Male apomixis Another type of androgenesis is male apomixis, or paternal apomixis, a reproductive process in which a plant develops from a sperm cell (male gamete) without the participation of a female cell (ovum). In this process, the zygote is formed solely with genetic material from the father, resulting in offspring genetically identical to the male organism. This has been noted in many plants such as Nicotiana, Capsicum frutescens, Cicer arietinum, Poa arachnifera, Solanum verrucosum, Phaeophyceae, Tripsacum dactyloides and Zea mays, and occurs as the regular reproductive method in Cupressus dupreziana. This contrasts with the more common apomixis, where development occurs without fertilization, but with genetic material only from the mother. There are also clonal species that reproduce through vegetative reproduction, such as Lomatia tasmanica and Pando, where the genetic material is exclusively male. Other species where androgenesis has been observed naturally are the stick insects Bacillus rossius and Bacillus grandii, the little fire ant Wasmannia auropunctata, Vollenhovia emeryi, Paratrechina longicornis, occasionally Apis mellifera, the Hypseleotris carp gudgeons, the parasitoid Venturia canescens, and occasionally fruit flies (Drosophila melanogaster) carrying a specific mutant allele. It has also been induced in many crops and fish via irradiation of an egg cell to destroy the maternal nuclear genome. Obligate androgenesis Obligate androgenesis is the process in which males are capable of producing both eggs and sperm; however, the eggs make no genetic contribution and the offspring come only from the sperm, which allows these individuals to self-fertilize and produce clonal offspring without the need for females. They are also capable of interbreeding with sexual and other androgenetic lineages in a phenomenon known as "egg parasitism." This method of reproduction has been found in several species of the clam genus Corbicula, plants such as Cupressus dupreziana, Lomatia tasmanica and Pando, and recently in the fish Squalius alburnoides.
Alternation between sexual and asexual reproduction Some species can alternate between sexual and asexual strategies, an ability known as heterogamy, depending on many conditions. Alternation is observed in several rotifer species (cyclical parthenogenesis, e.g. in Brachionus species) and a few types of insects. One example of this is aphids, which can engage in heterogony. In this system, females are born pregnant and produce only female offspring. This cycle allows them to reproduce very quickly. However, most species reproduce sexually once a year. This switch is triggered by environmental changes in the fall and causes females to develop eggs instead of embryos. This dynamic reproductive cycle allows them to produce specialized offspring with polyphenism, a type of polymorphism where different phenotypes have evolved to carry out specific tasks. The Cape bee Apis mellifera subsp. capensis can reproduce asexually through a process called thelytoky. The freshwater crustacean Daphnia reproduces by parthenogenesis in the spring to rapidly populate ponds, then switches to sexual reproduction as the intensity of competition and predation increases. Monogonont rotifers of the genus Brachionus reproduce via cyclical parthenogenesis: at low population densities females reproduce asexually, and at higher densities a chemical cue accumulates and induces the transition to sexual reproduction. Many protists and fungi alternate between sexual and asexual reproduction. A few species of amphibians, reptiles, and birds have a similar ability. The slime mold Dictyostelium undergoes binary fission (mitosis) as single-celled amoebae under favorable conditions. However, when conditions turn unfavorable, the cells aggregate and follow one of two different developmental pathways, depending on conditions. In the social pathway, they form a multi-cellular slug which then forms a fruiting body with asexually generated spores. In the sexual pathway, two cells fuse to form a giant cell that develops into a large cyst. When this macrocyst germinates, it releases hundreds of amoebic cells that are the product of meiotic recombination between the original two cells. The hyphae of the common mold (Rhizopus) are capable of producing both mitotic as well as meiotic spores. Many algae similarly switch between sexual and asexual reproduction. A number of plants use both sexual and asexual means to produce new plants; some species alter their primary modes of reproduction from sexual to asexual under varying environmental conditions. Inheritance in asexual species In the rotifer Brachionus calyciflorus, asexual reproduction (obligate parthenogenesis) can be inherited by a recessive allele, which leads to loss of sexual reproduction in homozygous offspring. Inheritance of asexual reproduction by a single recessive locus has also been found in the parasitoid wasp Lysiphlebus fabarum. Examples in animals Asexual reproduction is found in nearly half of the animal phyla.
Parthenogenesis occurs in the hammerhead shark and the blacktip shark. In both cases, the sharks had reached sexual maturity in captivity in the absence of males, and in both cases the offspring were shown to be genetically identical to the mothers. The New Mexico whiptail is another example. Some reptiles use the ZW sex-determination system, which produces either males (with ZZ sex chromosomes) or females (with ZW or WW sex chromosomes). Until 2010, it was thought that the ZW chromosome system used by reptiles was incapable of producing viable WW offspring, but a (ZW) female boa constrictor was discovered to have produced viable female offspring with WW chromosomes. The female boa could have chosen any number of male partners (and had done so successfully in the past) but on this occasion she reproduced asexually, creating 22 female babies with WW sex-chromosomes. Polyembryony is a widespread form of asexual reproduction in animals, whereby the fertilized egg or a later stage of embryonic development splits to form genetically identical clones. Within animals, this phenomenon has been best studied in the parasitic Hymenoptera. In the nine-banded armadillo, this process is obligatory and usually gives rise to genetically identical quadruplets. In other mammals, monozygotic twinning has no apparent genetic basis, though its occurrence is common. There are at least 10 million identical human twins and triplets in the world today. Bdelloid rotifers reproduce exclusively asexually, and all individuals in the class Bdelloidea are females. Asexuality evolved in these animals millions of years ago and has persisted since. There is evidence to suggest that asexual reproduction has allowed the animals to evolve new proteins through the Meselson effect that have allowed them to survive better in periods of dehydration. Bdelloid rotifers are extraordinarily resistant to damage from ionizing radiation due to the same DNA-preserving adaptations used to survive dormancy. These adaptations include an extremely efficient mechanism for repairing DNA double-strand breaks. This repair mechanism was studied in two Bdelloidea species, Adineta vaga and Philodina roseola, and appears to involve mitotic recombination between homologous DNA regions within each species. Molecular evidence strongly suggests that several species of the stick insect genus Timema have used only asexual (parthenogenetic) reproduction for millions of years, the longest period known for any insect. Similar findings suggest that the mite species Oppiella nova may have reproduced entirely asexually for millions of years. In the grass thrips genus Aptinothrips there have been several transitions to asexuality, likely due to different causes. Adaptive significance of asexual reproduction A complete lack of sexual reproduction is relatively rare among multicellular organisms, particularly animals. It is not entirely understood why the ability to reproduce sexually is so common among them. Current hypotheses suggest that asexual reproduction may have short term benefits when rapid population growth is important or in stable environments, while sexual reproduction offers a net advantage by allowing more rapid generation of genetic diversity, allowing adaptation to changing environments. Developmental constraints may underlie why few animals have relinquished sexual reproduction completely in their life-cycles. Almost all asexual modes of reproduction maintain meiosis either in a modified form or as an alternative pathway. 
Facultatively apomictic plants increase frequencies of sexuality relative to apomixis after abiotic stress. Another constraint on switching from sexual to asexual reproduction would be the concomitant loss of meiosis and the protective recombinational repair of DNA damage afforded as one function of meiosis. See also Alternation of generations Self-fertilization Bacterial conjugation Biological life cycle Biological reproduction, also simply reproduction Cloning Hermaphrodite Plant reproduction Sex References Further reading External links Asexual reproduction Intestinal Protozoa
Asexual reproduction
[ "Biology" ]
4,596
[ "Behavior", "Asexual reproduction", "Reproduction" ]
2,761
https://en.wikipedia.org/wiki/Alkene
In organic chemistry, an alkene, or olefin, is a hydrocarbon containing a carbon–carbon double bond. The double bond may be internal or in the terminal position. Terminal alkenes are also known as α-olefins. The International Union of Pure and Applied Chemistry (IUPAC) recommends using the name "alkene" only for acyclic hydrocarbons with just one double bond; alkadiene, alkatriene, etc., or polyene for acyclic hydrocarbons with two or more double bonds; cycloalkene, cycloalkadiene, etc. for cyclic ones; and "olefin" for the general class – cyclic or acyclic, with one or more double bonds. Acyclic alkenes, with only one double bond and no other functional groups (also known as mono-enes), form a homologous series of hydrocarbons with the general formula CnH2n, with n being a natural number greater than 1 (which is two hydrogens less than the corresponding alkane). When n is four or more, isomers are possible, distinguished by the position and conformation of the double bond. Alkenes are generally colorless non-polar compounds, somewhat similar to alkanes but more reactive. The first few members of the series are gases or liquids at room temperature. The simplest alkene, ethylene (C2H4) (or "ethene" in the IUPAC nomenclature) is the organic compound produced on the largest scale industrially. Aromatic compounds are often drawn as cyclic alkenes; however, their structure and properties are sufficiently distinct that they are not classified as alkenes or olefins. Hydrocarbons with two overlapping double bonds (C=C=C) are called allenes (the simplest such compound is itself called allene), and those with three or more overlapping bonds (C=C=C=C, etc.) are called cumulenes. Structural isomerism Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow: C2H4: ethylene only. C3H6: propylene only. C4H8: 3 isomers: 1-butene, 2-butene, and isobutylene. C5H10: 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene. C6H12: 13 isomers: 1-hexene, 2-hexene, 3-hexene, 2-methyl-1-pentene, 3-methyl-1-pentene, 4-methyl-1-pentene, 2-methyl-2-pentene, 3-methyl-2-pentene, 4-methyl-2-pentene, 2,3-dimethyl-1-butene, 3,3-dimethyl-1-butene, 2,3-dimethyl-2-butene, 2-ethyl-1-butene. Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbon atoms, particularly within the larger molecules. The number of potential isomers increases rapidly with additional carbon atoms. Structure and bonding Bonding A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond. Each carbon atom of the double bond uses its three sp2 hybrid orbitals to form sigma bonds to three atoms (the other carbon atom and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp2 hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and a half on the other. With a strength of 65 kcal/mol, the pi bond is significantly weaker than the sigma bond. Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. 
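As a rough, illustrative estimate (it uses only the bond energies quoted above, so the figure is an approximation rather than a measured rotational barrier), the cost of breaking that alignment is about the difference between the double-bond and single-bond energies: E(π) ≈ E(C=C) − E(C−C) = 611 kJ/mol − 347 kJ/mol = 264 kJ/mol ≈ 63 kcal/mol, in line with the roughly 65 kcal/mol pi-bond strength cited above.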
Consequently, cis or trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties. Shape As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon atom in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbon atoms of the double bond. For example, the C–C–C bond angle in propylene is 123.9°. For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, bicyclic systems require S ≥ 7 for stability and tricyclic systems require S ≥ 11. Isomerism In organic chemistry, the prefixes cis- and trans- are used to describe the positions of functional groups attached to carbon atoms joined by a double bond. In Latin, cis and trans mean "on this side of" and "on the other side of" respectively. Therefore, if the functional groups are both on the same side of the carbon chain, the bond is said to have cis- configuration, otherwise (i.e. the functional groups are on the opposite side of the carbon chain), the bond is said to have trans- configuration. For there to be cis- and trans- configurations, there must be a carbon chain, or at least one functional group attached to each carbon must be the same for both. E- and Z- configuration can be used instead in a more general case where all four functional groups attached to carbon atoms in a double bond are different. E- and Z- are abbreviations of the German words entgegen (opposite) and zusammen (together). In E- and Z-isomerism, each functional group is assigned a priority based on the Cahn–Ingold–Prelog priority rules. If the two groups with higher priority are on the same side of the double bond, the bond is assigned Z- configuration, otherwise (i.e. the two groups with higher priority are on the opposite side of the double bond), the bond is assigned E- configuration. Cis- and trans- configurations do not have a fixed relationship with E- and Z-configurations. Physical properties Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbon atoms are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increase in molecular mass. Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. Strained alkenes, in particular norbornene and trans-cyclooctene, are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper. 
Boiling and melting points Below is a list of the boiling and melting points of various alkenes with the corresponding alkane and alkyne analogues. Infrared spectroscopy In the IR spectrum, the stretching/compression of the C=C bond gives a peak at 1670–1600 cm−1. The band is weak in symmetrical alkenes. The bending of the C=C bond absorbs between 1000 and 650 cm−1. NMR spectroscopy In 1H NMR spectroscopy, the hydrogens bonded to the double-bond carbons give a δH of 4.5–6.5 ppm. The double bond will also deshield the hydrogen attached to the carbons adjacent to the sp2 carbons, and this generates δH = 1.6–2 ppm peaks. Cis/trans isomers are distinguishable due to different J-coupling constants. Cis vicinal hydrogens will have coupling constants in the range of 6–14 Hz, whereas the trans will have coupling constants of 11–18 Hz. In the 13C NMR spectra of alkenes, double bonds also deshield the carbons, shifting them downfield. C=C double bonds usually have chemical shift of about 100–170 ppm. Combustion Like most other hydrocarbons, alkenes combust to give carbon dioxide and water. The combustion of alkenes releases less energy than burning the same molar amount of saturated hydrocarbons with the same number of carbons. This trend can be clearly seen in the list of standard enthalpy of combustion of hydrocarbons. Reactions Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to this pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation. Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the presence of allylic CH centers. The former dominates but the allylic sites are important too. Addition to the unsaturated bonds Hydrogenation involves the addition of H2, resulting in an alkane. The equation for the hydrogenation of ethylene to form ethane is: H2C=CH2 + H2 → H3C−CH3 Hydrogenation reactions usually require catalysts to increase their reaction rate. The total number of hydrogens that can be added to an unsaturated hydrocarbon depends on its degree of unsaturation. Similarly, halogenation involves the addition of a halogen molecule, such as Br2, resulting in a dihaloalkane. The equation for the bromination of ethylene to form 1,2-dibromoethane is: H2C=CH2 + Br2 → H2CBr−CH2Br Unlike hydrogenation, these halogenation reactions do not require catalysts. The reaction occurs in two steps, with a halonium ion as an intermediate. The bromine test is used to test the saturation of hydrocarbons. The bromine test can also be used as an indication of the degree of unsaturation for unsaturated hydrocarbons. The bromine number is defined as the grams of bromine able to react with 100 g of product. As with hydrogenation, the uptake of bromine also depends on the number of π bonds. A higher bromine number indicates a higher degree of unsaturation. The π bonds of alkene hydrocarbons are also susceptible to hydration. The reaction usually involves a strong acid as catalyst. The first step in hydration often involves formation of a carbocation. The net result of the reaction will be an alcohol. The reaction equation for the hydration of ethylene is: H2C=CH2 + H2O → CH3CH2OH Hydrohalogenation involves addition of H−X to unsaturated hydrocarbons. This reaction results in new C−H and C−X σ bonds. The formation of the intermediate carbocation is selective and follows Markovnikov's rule. 
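As a worked illustration of that selectivity (a standard textbook example, not one taken from the surrounding text), adding HBr to propene proceeds through the more stable secondary carbocation, so the halogen ends up on the more substituted carbon: CH3−CH=CH2 + HBr → CH3−CHBr−CH3 (2-bromopropane, the Markovnikov product). The anti-Markovnikov isomer, 1-bromopropane, becomes the major product only under radical conditions (the peroxide effect).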
The hydrohalogenation of an alkene results in a haloalkane. The reaction equation for HBr addition to ethylene is: H2C=CH2 + HBr → CH3CH2Br Cycloaddition Alkenes add to dienes to give cyclohexenes. This conversion is an example of a Diels-Alder reaction. Such reactions proceed with retention of stereochemistry. The rates are sensitive to electron-withdrawing or electron-donating substituents. When irradiated by UV-light, alkenes dimerize to give cyclobutanes. Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide: Oxidation Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides: For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of silver-based catalysts: Alkenes react with ozone, leading to the scission of the double bond. The process is called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide ((CH3)2S): When treated with a hot, concentrated, acidified solution of KMnO4, alkenes are cleaved to form ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and the ozonolysis can be used to determine the position of a double bond in an unknown alkene. The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants: R'CH=CR2 + 1/2 O2 + H2O → R'CH(OH)-C(OH)R2 This reaction is called dihydroxylation. In the presence of an appropriate photosensitiser, such as methylene blue, and light, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide: Polymerization Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkenes are usually referred to as polyolefins although they contain no olefins. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber. Allylic substitution The presence of a C=C π bond in unsaturated hydrocarbons lowers the dissociation energy of the allylic C−H bonds. Thus, these groupings are susceptible to free radical substitution at these C-H sites as well as addition reactions at the C=C site. In the presence of radical initiators, allylic C-H bonds can be halogenated. The presence of two C=C bonds flanking one methylene, i.e., doubly allylic, results in particularly weak C−H bonds. The high reactivity of these situations is the basis for certain free radical reactions, manifested in the chemistry of drying oils. Metathesis Alkenes undergo olefin metathesis, which cleaves and interchanges the substituents of the alkene. A related reaction is ethenolysis: Metal complexation In transition metal alkene complexes, alkenes serve as ligands for metals. 
In this case, the π electron density is donated to the metal d orbitals. The stronger the donation is, the stronger the back bonding from the metal d orbital to π* anti-bonding orbital of the alkene. This effect lowers the bond order of the alkene and increases the C-C bond length. One example is the complex . These complexes are related to the mechanisms of metal-catalyzed reactions of unsaturated hydrocarbons. Reaction overview Synthesis Industrial methods Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural-gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower molecular weight alkanes. The mixture is feedstock and temperature dependent, and separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons). Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. This is the reverse of the catalytic hydrogenation of alkenes. This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy. Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum. Elimination reactions One of the principal methods for alkene synthesis in the laboratory is the elimination reaction of alkyl halides, alcohols, and similar compounds. Most common is the β-elimination via the E2 or E1 mechanism. A commercially significant example is the production of vinyl chloride. The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule). Two common methods of elimination reactions are dehydrohalogenation of alkyl halides and dehydration of alcohols. A typical example is shown below; note that if possible, the H is anti to the leaving group, even though this leads to the less stable Z-isomer. Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene: CH3CH2OH → H2C=CH2 + H2O An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis). A thioketone and a phosphite ester combined (the Corey-Winter olefination) or diphosphorus tetraiodide will deoxygenate glycols to alkenes. Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. 
The Cope reaction is a syn-elimination that occurs at or below 150 °C, for example: The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product. Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate. Synthesis from carbonyl compounds Another important class of methods for alkene synthesis involves construction of a new carbon–carbon double bond by coupling or condensation of a carbonyl compound (such as an aldehyde or ketone) to a carbanion or its equivalent. Pre-eminent is the aldol condensation. Knoevenagel condensations are a related class of reactions that convert carbonyls into alkenes. Well-known methods are called olefinations. The Wittig reaction is illustrative, but other related methods are known, including the Horner–Wadsworth–Emmons reaction. The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide. Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react. A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction. A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction). Synthesis from alkenes The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as they are for the production of surfactants, processes incorporating an olefin metathesis step, such as the Shell higher olefin process, are important. Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used in this process: CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3 Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkenes themselves. It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond. From alkynes Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. 
If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene. For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives. Rearrangements and related reactions Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used such as the ene reaction and the Cope rearrangement. In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene. Application Unsaturated hydrocarbons are widely used to produce plastics, medicines, and other useful materials. Natural occurrence Alkenes are prevalent in nature. Plants are the main natural source of alkenes in the form of terpenes. Many of the most vivid natural pigments are terpenes; e.g. lycopene (red in tomatoes), carotene (orange in carrots), and xanthophylls (yellow in egg yolk). The simplest of all alkenes, ethylene, is a signaling molecule that influences the ripening of plants. IUPAC Nomenclature Although the nomenclature is not followed widely, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes. To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe. For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply: Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise: Number the carbons in that chain starting from the end that is closest to the double bond. Define the location k of the double bond as being the number of its first carbon. Name the side groups (other than hydrogen) according to the appropriate rules. Define the position of each side group as the number of the chain carbon it is attached to. Write the position and name of each side group. Write the names of the alkane with the same chain, replacing the "-ane" suffix by "k-ene". The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene"). The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: (CH3)3C–CH2–CH3 is "2,2-dimethylbutane", whereas (CH3)3C–CH=CH2 is "3,3-dimethyl-1-butene". More complex rules apply for polyenes and cycloalkenes. Cis–trans isomerism If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. 
For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin "on this side of") or trans- ("across", "on the other side of") before the name, respectively; as in cis-2-pentene or trans-2-butene. More generally, cis–trans isomerism will exist if each of the two carbons in the double bond has two different atoms or groups attached to it. Accounting for these cases, the IUPAC recommends the more general E–Z notation, instead of the cis and trans prefixes. This notation considers the group with highest CIP priority in each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen meaning "opposite"); if they are on the same side, it is labeled Z (from German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'". Groups containing C=C double bonds IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group. See also Alpha-olefin Annulene Aromatic hydrocarbon ("Arene") Dendralene Nitroalkene Radialene Nomenclature links Rule A-3. Unsaturated Compounds and Univalent Radicals IUPAC Blue Book. Rule A-4. Bivalent and Multivalent Radicals IUPAC Blue Book. Rules A-11.3, A-11.4, A-11.5 Unsaturated monocyclic hydrocarbons and substituents IUPAC Blue Book. Rule A-23. Hydrogenated Compounds of Fused Polycyclic Hydrocarbons IUPAC Blue Book. References
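To make the straight-chain numbering rules above concrete, the following minimal Python sketch (a hypothetical helper written purely for illustration, not part of any chemistry library) names simple unbranched acyclic mono-enes from a chain length and the position of the double bond; it deliberately ignores branching, the cis/trans and E–Z descriptors discussed above, and all other complications of full IUPAC nomenclature.

# Minimal illustration of the straight-chain alkene naming rules described above.
# Assumptions: unbranched acyclic chain of 2-10 carbons with exactly one C=C bond.
ROOTS = {2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def name_straight_chain_alkene(chain_length: int, double_bond_position: int) -> str:
    """Return an IUPAC-style name such as 'but-2-ene' or 'ethene'."""
    if chain_length not in ROOTS:
        raise ValueError("only chains of 2-10 carbons are handled in this sketch")
    if not 1 <= double_bond_position < chain_length:
        raise ValueError("the double bond must lie within the chain")
    # Number the chain from the end nearest the double bond (lowest-locant rule).
    locant = min(double_bond_position, chain_length - double_bond_position)
    root = ROOTS[chain_length]
    # Ethene and propene need no locant because the position is unambiguous.
    if chain_length <= 3:
        return root + "ene"
    return f"{root}-{locant}-ene"

print(name_straight_chain_alkene(2, 1))   # ethene
print(name_straight_chain_alkene(5, 2))   # pent-2-ene
print(name_straight_chain_alkene(6, 4))   # hex-2-ene

For example, a six-carbon chain whose double bond starts at the fourth carbon counted from one end is renumbered from the other end and comes out as hex-2-ene, matching the lowest-locant rule stated above.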
Alkene
[ "Chemistry" ]
6,385
[ "Organic compounds", "Hydrocarbons", "Alkenes" ]
2,763
https://en.wikipedia.org/wiki/Alkyne
Acetylene Propyne 1-Butyne In organic chemistry, an alkyne is an unsaturated hydrocarbon containing at least one carbon–carbon triple bond. The simplest acyclic alkynes with only one triple bond and no other functional groups form a homologous series with the general chemical formula CnH2n−2. Alkynes are traditionally known as acetylenes, although the name acetylene also refers specifically to C2H2, known formally as ethyne using IUPAC nomenclature. Like other hydrocarbons, alkynes are generally hydrophobic. Structure and bonding In acetylene, the H–C≡C bond angles are 180°. By virtue of this bond angle, alkynes are rod-like. Correspondingly, cyclic alkynes are rare. Benzyne cannot be isolated. The C≡C bond distance of 118 picometers (for C2H2) is much shorter than the C=C distance in alkenes (132 pm, for C2H4) or the C–C bond in alkanes (153 pm). The triple bond is very strong with a bond strength of 839 kJ/mol. The sigma bond contributes 369 kJ/mol, the first pi bond contributes 268 kJ/mol, and the second pi bond 202 kJ/mol. Bonding is usually discussed in the context of molecular orbital theory, which recognizes the triple bond as arising from overlap of s and p orbitals. In the language of valence bond theory, the carbon atoms in an alkyne bond are sp hybridized: they each have two unhybridized p orbitals and two sp hybrid orbitals. Overlap of an sp orbital from each atom forms one sp–sp sigma bond. Each p orbital on one atom overlaps one on the other atom, forming two pi bonds, giving a total of three bonds. The remaining sp orbital on each atom can form a sigma bond to another atom, for example to hydrogen atoms in the parent acetylene. The two sp orbitals project on opposite sides of the carbon atom. Terminal and internal alkynes Internal alkynes feature carbon substituents on each acetylenic carbon. Symmetrical examples include diphenylacetylene and 3-hexyne. They may also be asymmetrical, such as in 2-pentyne. Terminal alkynes have the formula RC≡CH, where at least one end of the alkyne is a hydrogen atom. An example is methylacetylene (propyne using IUPAC nomenclature). They are often prepared by alkylation of monosodium acetylide. Terminal alkynes, like acetylene itself, are mildly acidic, with pKa values of around 25. They are far more acidic than alkenes and alkanes, which have pKa values of around 40 and 50, respectively. The acidic hydrogen on terminal alkynes can be replaced by a variety of groups resulting in halo-, silyl-, and alkoxoalkynes. The carbanions generated by deprotonation of terminal alkynes are called acetylides. Internal alkynes are also considerably more acidic than alkenes and alkanes, though not nearly as acidic as terminal alkynes. The C–H bonds at the α position of alkynes (propargylic C–H bonds) can also be deprotonated using strong bases, with an estimated pKa of 35. This acidity can be used to isomerize internal alkynes to terminal alkynes using the alkyne zipper reaction. Naming alkynes In systematic chemical nomenclature, alkynes are named with the Greek prefix system without any additional letters. Examples include ethyne or octyne. In parent chains with four or more carbons, it is necessary to say where the triple bond is located. For octyne, one can either write 3-octyne or oct-3-yne when the bond starts at the third carbon. The lowest number possible is given to the triple bond. When no superior functional groups are present, the parent chain must include the triple bond even if it is not the longest possible carbon chain in the molecule. 
Ethyne is commonly called by its trivial name acetylene. In chemistry, the suffix -yne is used to denote the presence of a triple bond. In organic chemistry, the suffix often follows IUPAC nomenclature. However, inorganic compounds featuring unsaturation in the form of triple bonds may be denoted by substitutive nomenclature with the same methods used with alkynes (i.e. the name of the corresponding saturated compound is modified by replacing the "-ane" ending with "-yne"). "-diyne" is used when there are two triple bonds, and so on. The position of unsaturation is indicated by a numerical locant immediately preceding the "-yne" suffix, or 'locants' in the case of multiple triple bonds. Locants are chosen so that the numbers are as low as possible. "-yne" is also used as a suffix to name substituent groups that are triply bound to the parent compound. Sometimes a number between hyphens is inserted before it to state which atoms the triple bond is between. This suffix arose as a collapsed form of the end of the word "acetylene". The final "-e" disappears if it is followed by another suffix that starts with a vowel. Structural isomerism Alkynes having four or more carbon atoms can form different structural isomers by having the triple bond in different positions or having some of the carbon atoms be substituents rather than part of the parent chain. Other non-alkyne structural isomers are also possible. C2H2: acetylene only. C3H4: propyne only. C4H6: 2 isomers: 1-butyne and 2-butyne. C5H8: 3 isomers: 1-pentyne, 2-pentyne, and 3-methyl-1-butyne. C6H10: 7 isomers: 1-hexyne, 2-hexyne, 3-hexyne, 4-methyl-1-pentyne, 4-methyl-2-pentyne, 3-methyl-1-pentyne, 3,3-dimethyl-1-butyne. Synthesis From calcium carbide Classically, acetylene was prepared by hydrolysis (protonation) of calcium carbide (Ca2+[:C≡C:]2–): Ca2+[C≡C]2− + 2 H2O → HC≡CH + Ca(OH)2 which was in turn synthesized by combining quicklime and coke in an electric arc furnace at 2200 °C: CaO + 3 C (amorphous) → CaC2 + CO This was an industrially important process which provided access to hydrocarbons from coal resources for countries like Germany and China. However, the energy-intensive nature of this process is a major disadvantage and its share of the world's production of acetylene has steadily decreased relative to hydrocarbon cracking. Cracking Commercially, the dominant alkyne is acetylene itself, which is used as a fuel and a precursor to other compounds, e.g., acrylates. Hundreds of millions of kilograms are produced annually by partial oxidation of natural gas: 2 CH4 + 3/2 O2 → HC≡CH + 3 H2O Propyne, also industrially useful, is also prepared by thermal cracking of hydrocarbons. Alkylation and arylation of terminal alkynes Terminal alkynes (RC≡CH, including acetylene itself) can be deprotonated by bases like NaNH2, BuLi, or EtMgBr to give acetylide anions (RC≡C:–M+, M = Na, Li, MgBr) which can be alkylated by addition to carbonyl groups (Favorskii reaction), ring opening of epoxides, or SN2-type substitution of unhindered primary alkyl halides. In the presence of transition metal catalysts, classically a combination of Pd(PPh3)2Cl2 and CuI, terminal acetylenes (RC≡CH) can react with aryl iodides and bromides (ArI or ArBr) in the presence of a secondary or tertiary amine like Et3N to give arylacetylenes (RC≡CAr) in the Sonogashira reaction. The availability of these reliable reactions makes terminal alkynes useful building blocks for preparing internal alkynes. 
Dehydrohalogenation and related reactions Alkynes are prepared from 1,1- and 1,2-dihaloalkanes by double dehydrohalogenation. The reaction provides a means to generate alkynes from alkenes, which are first halogenated and then dehydrohalogenated. For example, phenylacetylene can be generated from styrene by bromination followed by treatment of the resulting 1,2-dibromo-1-phenylethane with sodium amide in ammonia: Via the Fritsch–Buttenberg–Wiechell rearrangement, alkynes are prepared from vinyl bromides. Alkynes can be prepared from aldehydes using the Corey–Fuchs reaction and from aldehydes or ketones by the Seyferth–Gilbert homologation. Vinyl halides are susceptible to dehydrohalogenation. Reactions, including applications Featuring a reactive functional group, alkynes participate in many organic reactions. Such use was pioneered by Ralph Raphael, who in 1955 wrote the first book describing their versatility as intermediates in synthesis. In spite of their kinetic stability (persistence) due to their strong triple bonds, alkynes are a thermodynamically unstable functional group, as can be gleaned from the highly positive heats of formation of small alkynes. For example, acetylene has a heat of formation of +227.4 kJ/mol (+54.2 kcal/mol), indicating a much higher energy content compared to its constituent elements. The highly exothermic combustion of acetylene is exploited industrially in oxyacetylene torches used in welding. Other reactions involving alkynes are often highly thermodynamically favorable (exothermic/exergonic) for the same reason. Hydrogenation Being more unsaturated than alkenes, alkynes characteristically undergo reactions that show that they are "doubly unsaturated". Alkynes are capable of adding two equivalents of H2, whereas an alkene adds only one equivalent. Depending on catalysts and conditions, alkynes add one or two equivalents of hydrogen. Partial hydrogenation, stopping after the addition of only one equivalent to give the alkene, is usually more desirable since alkanes are less useful: The largest scale application of this technology is the conversion of acetylene to ethylene in refineries (the steam cracking of alkanes yields a few percent acetylene, which is selectively hydrogenated in the presence of a palladium/silver catalyst). For more complex alkynes, the Lindlar catalyst is widely recommended to avoid formation of the alkane, for example in the conversion of phenylacetylene to styrene; further hydrogenation of the intermediate alkene gives the alkane: RCH=CR'H + H2 → RCH2CR'H2 Similarly, halogenation of alkynes gives the alkene dihalides or alkyl tetrahalides. The addition of one equivalent of H2 to internal alkynes gives cis-alkenes. Addition of halogens and related reagents Alkynes characteristically are capable of adding two equivalents of halogens and hydrogen halides. RC≡CR' + 2 Br2 → RCBr2CR'Br2 The addition of nonpolar element–hydrogen bonds across the triple bond is general for silanes, boranes, and related hydrides. The hydroboration of alkynes gives vinylic boranes which oxidize to the corresponding aldehyde or ketone. In the thiol-yne reaction the substrate is a thiol. Addition of hydrogen halides has long been of interest. In the presence of mercuric chloride as a catalyst, acetylene and hydrogen chloride react to give vinyl chloride. While this method has been abandoned in the West, it remains the main production method in China. Hydration The hydration reaction of acetylene gives acetaldehyde. The reaction proceeds by formation of vinyl alcohol, which tautomerizes to form the aldehyde. 
This reaction was once a major industrial process but it has been displaced by the Wacker process. This reaction occurs in nature, the catalyst being acetylene hydratase. Hydration of phenylacetylene gives acetophenone: PhC≡CH + H2O → PhCOCH3 A suitable metal catalyst likewise effects hydration of 1,8-nonadiyne to 2,8-nonanedione: HC≡C(CH2)5C≡CH + 2 H2O → CH3CO(CH2)5COCH3 Isomerization to allenes Alkynes can be isomerized by strong base or transition metals to allenes. Due to their comparable thermodynamic stabilities, the equilibrium constant of alkyne/allene isomerization is generally within several orders of magnitude of unity. For example, propyne can be isomerized to give an equilibrium mixture with propadiene: HC≡C−CH3 ⇌ CH2=C=CH2 Cycloadditions and oxidation Alkynes undergo diverse cycloaddition reactions. The Diels–Alder reaction with 1,3-dienes gives 1,4-cyclohexadienes. This general reaction has been extensively developed. Electrophilic alkynes are especially effective dienophiles. The "cycloadduct" derived from the addition of alkynes to 2-pyrone eliminates carbon dioxide to give the aromatic compound. Other specialized cycloadditions include multicomponent reactions such as alkyne trimerisation to give aromatic compounds and the [2+2+1]-cycloaddition of an alkyne, alkene and carbon monoxide in the Pauson–Khand reaction. Non-carbon reagents also undergo cyclization, e.g. the azide–alkyne Huisgen cycloaddition to give triazoles. Cycloaddition processes involving alkynes are often catalyzed by metals, e.g. enyne metathesis and alkyne metathesis, which allows the scrambling of carbyne (RC) centers: RC≡CR + R'C≡CR' ⇌ 2 RC≡CR' Oxidative cleavage of alkynes proceeds via cycloaddition to metal oxides. Most famously, potassium permanganate converts alkynes to a pair of carboxylic acids. Reactions specific for terminal alkynes Terminal alkynes are readily converted to many derivatives, e.g. by coupling reactions and condensations. Butynediol is produced via the condensation of formaldehyde with acetylene: 2 CH2O + HC≡CH → HOCH2C≡CCH2OH In the Sonogashira reaction, terminal alkynes are coupled with aryl or vinyl halides: This reactivity exploits the fact that terminal alkynes are weak acids, whose typical pKa values around 25 place them between those of ammonia (35) and ethanol (16): RC≡CH + MX → RC≡CM + HX where MX = NaNH2, LiBu, or RMgX. The reactions of alkynes with certain metal cations, e.g. Ag+ and Cu+, also give acetylides. Thus, a few drops of diamminesilver(I) hydroxide (Ag(NH3)2OH) react with terminal alkynes, signaled by formation of a white precipitate of the silver acetylide. This reactivity is the basis of alkyne coupling reactions, including the Cadiot–Chodkiewicz coupling, Glaser coupling, and the Eglinton coupling shown below: 2 RC≡CH → RC≡C−C≡CR (Cu(OAc)2, pyridine) In the Favorskii reaction and in alkynylations in general, terminal alkynes add to carbonyl compounds to give the hydroxyalkyne. Metal complexes Alkynes form complexes with transition metals. Such complexes occur also in metal catalyzed reactions of alkynes such as alkyne trimerization. Terminal alkynes, including acetylene itself, react with water to give aldehydes. The transformation typically requires metal catalysts to give this anti-Markovnikov addition result. Alkynes in nature and medicine According to Ferdinand Bohlmann, the first naturally occurring acetylenic compound, dehydromatricaria ester, was isolated from an Artemisia species in 1826. 
In the nearly two centuries that have followed, well over a thousand naturally occurring acetylenes have been discovered and reported. Polyynes, a subset of this class of natural products, have been isolated from a wide variety of plant species, cultures of higher fungi, bacteria, marine sponges, and corals. Some acids like tariric acid contain an alkyne group. Diynes and triynes, species with the linkage RC≡C–C≡CR′ and RC≡C–C≡C–C≡CR′ respectively, occur in certain plants (Ichthyothere, Chrysanthemum, Cicuta, Oenanthe and other members of the Asteraceae and Apiaceae families). Some examples are cicutoxin, oenanthotoxin, and falcarinol. These compounds are highly bioactive, e.g. as nematocides. 1-Phenylhepta-1,3,5-triyne is illustrative of a naturally occurring triyne. Alkynes occur in some pharmaceuticals, including the contraceptive noretynodrel. A carbon–carbon triple bond is also present in marketed drugs such as the antiretroviral Efavirenz and the antifungal Terbinafine. Molecules called ene-diynes feature a ring containing an alkene ("ene") between two alkyne groups ("diyne"). These compounds, e.g. calicheamicin, are some of the most aggressive antitumor drugs known, so much so that the ene-diyne subunit is sometimes referred to as a "warhead". Ene-diynes undergo rearrangement via the Bergman cyclization, generating highly reactive radical intermediates that attack DNA within the tumor. See also -yne cycloalkyne References
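A short worked illustration of the acidity figures quoted earlier (the pKa values are approximate, so this is an order-of-magnitude estimate only): for deprotonation of a terminal alkyne by sodium amide, RC≡CH + NH2− ⇌ RC≡C− + NH3, the equilibrium constant is roughly K ≈ 10^(pKa(NH3) − pKa(RC≡CH)) = 10^(35 − 25) = 10^10, so the equilibrium lies essentially completely on the acetylide side, which is why NaNH2 and alkyllithiums are the standard bases for generating acetylides.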
Alkyne
[ "Chemistry" ]
4,051
[ "Organic compounds", "Hydrocarbons", "Alkynes" ]
2,766
https://en.wikipedia.org/wiki/Ames%20test
The Ames test is a widely employed method that uses bacteria to test whether a given chemical can cause mutations in the DNA of the test organism. More formally, it is a biological assay to assess the mutagenic potential of chemical compounds. A positive test indicates that the chemical is mutagenic and therefore may act as a carcinogen, because cancer is often linked to mutation. The test serves as a quick and convenient assay to estimate the carcinogenic potential of a compound because standard carcinogen assays on mice and rats are time-consuming (taking two to three years to complete) and expensive. However, false-positives and false-negatives are known. The procedure was described in a series of papers in the early 1970s by Bruce Ames and his group at the University of California, Berkeley. General procedure The Ames test uses several strains of the bacterium Salmonella typhimurium that carry mutations in genes involved in histidine synthesis. These strains are auxotrophic mutants, i.e. they require histidine for growth, but cannot produce it. The method tests the capability of the tested substance in creating mutations that result in a return to a "prototrophic" state, so that the cells can grow on a histidine-free medium. The tester strains are specially constructed to detect either frameshift (e.g. strains TA-1537 and TA-1538) or point (e.g. strain TA-1531) mutations in the genes required to synthesize histidine, so that mutagens acting via different mechanisms may be identified. Some compounds are quite specific, causing reversions in just one or two strains. The tester strains also carry mutations in the genes responsible for lipopolysaccharide synthesis, making the cell wall of the bacteria more permeable, and in the excision repair system to make the test more sensitive. Larger organisms like mammals have metabolic processes that could potentially turn a chemical considered not mutagenic into one that is or one that is considered mutagenic into one that is not. Therefore, to more effectively test a chemical compound's mutagenicity in relation to larger organisms, rat liver enzymes can be added in an attempt to replicate the metabolic processes' effect on the compound being tested in the Ames Test. Rat liver extract is optionally added to simulate the effect of metabolism, as some compounds, like benzo[a]pyrene, are not mutagenic themselves but their metabolic products are. The bacteria are spread on an agar plate with a small amount of histidine. This small amount of histidine in the growth medium allows the bacteria to grow for an initial time and have the opportunity to mutate. When the histidine is depleted only bacteria that have mutated to gain the ability to produce its own histidine will survive. The plate is incubated for 48 hours. The mutagenicity of a substance is proportional to the number of colonies observed. Ames test and carcinogens Mutagens identified via Ames test are also possible carcinogens, and early studies by Ames showed that 90% of known carcinogens may be identified via this test. Later studies however showed identification of 50–70% of known carcinogens. The test was used to identify a number of compounds previously used in commercial products as potential carcinogens. Examples include tris(2,3-dibromopropyl)phosphate, which was used as a flame retardant in plastic and textiles such as children's sleepwear, and furylfuramide which was used as an antibacterial additive in food in Japan in the 1960s and 1970s. 
Furylfuramide in fact had previously passed animal tests, but more vigorous tests after its identification in the Ames test showed it to be carcinogenic. Their positive tests resulted in those chemicals being withdrawn from use in consumer products. One interesting result from the Ames test is that the dose response curve using varying concentrations of the chemical is almost always linear, indicating that there is no threshold concentration for mutagenesis. It therefore suggests that, as with radiation, there may be no safe threshold for chemical mutagens or carcinogens. However, some have proposed that organisms could tolerate low levels of mutagens due to protective mechanisms such as DNA repair, and thus a threshold may exist for certain chemical mutagens. Bruce Ames himself argued against linear dose-response extrapolation from the high dose used in carcinogenesis tests in animal systems to the lower dose of chemicals normally encountered in human exposure, as the results may be false positives due to mitogenic response caused by the artificially high dose of chemicals used in such tests. He also cautioned against the "hysteria over tiny traces of chemicals that may or may not cause cancer", that "completely drives out the major risks you should be aware of". The Ames test is often used as one of the initial screens for potential drugs to weed out possible carcinogens, and it is one of the eight tests required under the Pesticide Act (USA) and one of the six tests required under the Toxic Substances Control Act (USA). Limitations Salmonella typhimurium is a prokaryote, therefore it is not a perfect model for humans. Rat liver S9 fraction is used to mimic the mammalian metabolic conditions so that the mutagenic potential of metabolites formed by a parent molecule in the hepatic system can be assessed; however, there are differences in metabolism between humans and rats that can affect the mutagenicity of the chemicals being tested. The test may therefore be improved by the use of human liver S9 fraction; its use was previously limited by its availability, but it is now available commercially and therefore may be more feasible. An adapted in vitro model has been made for eukaryotic cells, for example yeast. Mutagens identified in the Ames test need not necessarily be carcinogenic, and further tests are required for any potential carcinogen identified in the test. Drugs that contain the nitrate moiety sometimes come back positive for Ames when they are indeed safe. The nitrate compounds may generate nitric oxide, an important signal molecule that can give a false positive. Nitroglycerin is an example that gives a positive Ames yet is still used in treatment today. Nitrates in food however may be reduced by bacterial action to nitrites which are known to generate carcinogens by reacting with amines and amides. Long toxicology and outcome studies are needed with such compounds to disprove a positive Ames test. Fluctuation method The Ames test was initially developed using agar plates (the plate incorporation technique), as described above. Since that time, an alternative to performing the Ames test has been developed, which is known as the "fluctuation method". This technique is the same in concept as the agar-based method, with bacteria being added to a reaction mixture with a small amount of histidine, which allows the bacteria to grow and mutate, returning to synthesize their own histidine. 
By including a pH indicator, the frequency of mutation is counted in microplates as the number of wells which have changed color (caused by a drop in pH due to metabolic processes of reproducing bacteria). As with the traditional Ames test, the sample is compared to the natural background rate of reverse mutation in order to establish the genotoxicity of a substance. The fluctuation method is performed entirely in liquid culture and is scored by counting the number of wells that turn yellow from purple in 96-well or 384-well microplates. In the 96-well plate method the frequency of mutation is counted as the number of wells out of 96 which have changed color. The plates are incubated for up to five days, with mutated (yellow) colonies being counted each day and compared to the background rate of reverse mutation using established tables of significance to determine the significant differences between the background rate of mutation and that for the tested samples. In the more scaled-down 384-well plate microfluctuation method the frequency of mutation is counted as the number of wells out of 48 which have changed color after 2 days of incubation. A test sample is assayed across 6 dose levels with concurrent zero-dose (background) and positive controls which all fit into one 384-well plate. The assay is performed in triplicates to provide statistical robustness. It uses the recommended OECD Guideline 471 tester strains (histidine auxotrophs and tryptophan auxotrophs). The fluctuation method is comparable to the traditional pour plate method in terms of sensitivity and accuracy, however, it does have a number of advantages: it needs less test sample, it has a simple colorimetric endpoint, counting the number of positive wells out of possible 96 or 48 wells is much less time-consuming than counting individual colonies on an agar plate. Several commercial kits are available. Most kits have consumable components in a ready-to-use state, including lyophilized bacteria, and tests can be performed using multichannel pipettes. The fluctuation method also allows for testing higher volumes of aqueous samples (up to 75% v/v), increasing the sensitivity and extending its application to low-level environmental mutagens. References Further reading Applied genetics Biochemistry detection reactions Laboratory techniques Toxicology tests
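To make the well-counting concrete, here is a minimal Python sketch of how a 96-well fluctuation plate might be scored (the helper names are hypothetical and the simple binomial comparison shown is only illustrative; real studies use the established significance tables referred to above).

# Illustrative scoring of an Ames fluctuation-method plate.
# A well counts as positive (revertant growth) if its pH indicator has turned yellow.
# Assumption: a one-sided binomial comparison against the zero-dose background plate;
# the published significance tables mentioned above are what is actually used in practice.
from math import comb

def positive_wells(plate):
    """Count wells that changed colour (True means yellow, i.e. revertants grew)."""
    return sum(bool(w) for w in plate)

def binomial_p_value(treated_positives, total_wells, background_rate):
    """Probability of seeing at least this many positive wells by chance,
    given the background reversion rate estimated from the control plate."""
    p = background_rate
    return sum(
        comb(total_wells, k) * p**k * (1 - p) ** (total_wells - k)
        for k in range(treated_positives, total_wells + 1)
    )

# Example: the control plate shows 5 of 96 spontaneous revertant wells,
# while the test substance shows 23 of 96 positive wells.
background = 5 / 96
print(f"p = {binomial_p_value(23, 96, background):.2e}")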
Ames test
[ "Chemistry", "Biology", "Environmental_science" ]
1,946
[ "Toxicology", "Biochemistry detection reactions", "Biochemical reactions", "Microbiology techniques", "nan", "Toxicology tests" ]
2,770
https://en.wikipedia.org/wiki/Anatomical%20Therapeutic%20Chemical%20Classification%20System
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. Its purpose is to serve as an aid for monitoring drug use and for research aimed at improving the quality of medication use. It does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976. Coding system This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code, for example acetylsalicylic acid (aspirin) has one code as a drug for local oral treatment, one as a platelet inhibitor, and one as an analgesic and antipyretic; and one code can represent more than one active ingredient, for example the combination of perindopril with amlodipine, two active ingredients that each have their own code when prescribed alone. The ATC classification system is a strict hierarchy, meaning that each code necessarily has one and only one parent code, except for the 14 codes at the topmost level which have no parents. The codes are semantic identifiers, meaning they depict information by themselves beyond serving as identifiers (namely, the codes themselves depict the complete lineage of parenthood). As of 7 May 2020, there are 6,331 codes in ATC. History The ATC system is based on the earlier Anatomical Classification System, which is intended as a tool for the pharmaceutical industry to classify pharmaceutical products (as opposed to their active ingredients). This system, confusingly also called ATC, was initiated in 1971 by the European Pharmaceutical Market Research Association (EphMRA) and is being maintained by the EphMRA and Intellus. Its codes are organised into four levels. The WHO's system, having five levels, is an extension and modification of the EphMRA's. It was first published in 1976. Classification In this system, drugs are classified into groups at five different levels: First level The first level of the code indicates the anatomical main group and consists of one letter. There are 14 main groups: Example: C Cardiovascular system Second level The second level of the code indicates the therapeutic subgroup and consists of two digits. Example: C03 Diuretics Third level The third level of the code indicates the therapeutic/pharmacological subgroup and consists of one letter. Example: C03C High-ceiling diuretics Fourth level The fourth level of the code indicates the chemical/therapeutic/pharmacological subgroup and consists of one letter. Example: C03CA Sulfonamides Fifth level The fifth level of the code indicates the chemical substance and consists of two digits. Example: C03CA01 furosemide Other ATC classification systems ATCvet The Anatomical Therapeutic Chemical Classification System for veterinary medicinal products (ATCvet) is used to classify veterinary drugs. ATCvet codes can be created by placing the letter Q in front of the ATC code of most human medications. 
For example, furosemide for veterinary use has the code QC03CA01. Some codes are used exclusively for veterinary drugs, such as QI Immunologicals, QJ51 Antibacterials for intramammary use or QN05AX90 amperozide. Herbal ATC (HATC) The Herbal ATC system (HATC) is an ATC classification of herbal substances; it differs from the regular ATC system by using 4 digits instead of 2 at the 5th level group. The herbal classification is not adopted by WHO. The Uppsala Monitoring Centre is responsible for the Herbal ATC classification, and it is part of the WHODrug Global portfolio available by subscription. Defined daily dose The ATC system also includes defined daily doses (DDDs) for many drugs. This is a measurement of drug consumption based on the usual daily dose for a given drug. According to the definition, "[t]he DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults." Adaptations and updates National issues of the ATC classification, such as the German Anatomisch-therapeutisch-chemische Klassifikation mit Tagesdosen, may include additional codes and DDDs not present in the WHO version. ATC follows guidelines in creating new codes for newly approved drugs. An application is submitted to WHO for ATC classification and DDD assignment. A preliminary or temporary code is assigned and published on the website and in the WHO Drug Information for comment or objection. New ATC/DDD codes are discussed at the semi-annual Working Group meeting. If accepted, it becomes a final decision, is published semi-annually on the website and in WHO Drug Information, and is implemented in the annual print/on-line ATC/DDD Index on January 1. Changes to existing ATC/DDD codes follow a similar process to become temporary codes and, if accepted, become final decisions as ATC/DDD alterations. ATC and DDD alterations are only valid and implemented in the coming annual updates; the original codes must continue to be used until the end of the year. An updated version of the complete on-line/print ATC index with DDDs is published annually on January 1. See also Classification of Pharmaco-Therapeutic Referrals (CPR) ICD-10 International Classification of Diseases International Classification of Primary Care (ICPC-2) / ICPC-2 PLUS Medical classification Pharmaceutical care Pharmacotherapy RxNorm References External links Quarterly journal providing an overview of topics relating to medicines development and regulation. Anatomical Classification (ATC and NFC) from EphMRA atcd. R script to scrape the ATC data from the WHOCC website; contains link to download entire ATC tree. Drugs Pharmacological classification systems
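Because each ATC code encodes its complete lineage, the five levels can be recovered from the code string itself by fixed character positions. The following Python sketch illustrates this for the furosemide example above; the function name and the handling of the veterinary Q prefix are illustrative only and are not part of any official WHO tooling.

```python
def split_atc(code: str) -> dict:
    """Decompose a complete ATC code into its five hierarchical levels.

    Level 1: 1 letter   (anatomical main group),              e.g. 'C'
    Level 2: + 2 digits (therapeutic subgroup),               e.g. 'C03'
    Level 3: + 1 letter (therapeutic/pharmacological group),  e.g. 'C03C'
    Level 4: + 1 letter (chemical/therap./pharm. subgroup),   e.g. 'C03CA'
    Level 5: + 2 digits (chemical substance),                 e.g. 'C03CA01'
    A leading 'Q' marks an ATCvet code and is stripped first.
    """
    vet = code.upper().startswith("Q")
    base = code.upper()[1:] if vet else code.upper()
    if len(base) != 7:
        raise ValueError(f"not a complete 5th-level ATC code: {code}")
    return {
        "level1": base[:1],
        "level2": base[:3],
        "level3": base[:4],
        "level4": base[:5],
        "level5": base[:7],
        "atcvet": vet,
    }

print(split_atc("C03CA01"))   # human code for furosemide
print(split_atc("QC03CA01"))  # the same substance classified for veterinary use
```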
Anatomical Therapeutic Chemical Classification System
[ "Chemistry", "Biology" ]
1,290
[ "Drugs by target organ system", "Pharmacology", "Products of chemical industry", "Pharmacological classification systems", "Organ systems", "Chemicals in medicine", "Drugs" ]
2,778
https://en.wikipedia.org/wiki/Parallel%20ATA
Parallel ATA (PATA), originally AT Attachment (ATA), also known as Integrated Drive Electronics (IDE), is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, optical disc drives, and tape drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of SATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of 18 in (457 mm). Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems. History and terminology The standard was originally conceived as the "AT Bus Attachment", officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology", so ATA has also been referred to as "Advanced Technology Attachment". When a newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Physical ATA interfaces became a standard component in all PCs, initially on host bus adapters, sometimes on a sound card, but ultimately as two physical interfaces embedded in a Southbridge chip on a motherboard. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems. They were replaced by SATA interfaces. IDE and ATA-1 The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Compaq (the initial customer), they worked with various disk drive manufacturers to develop and ship early products with the goal of remaining software compatible with the existing IBM PC hard drive interface. The first such drives appeared internally in Compaq PCs in 1986 and were first separately offered by Conner Peripherals as the CP342 in June 1987. The term Integrated Drive Electronics refers to the drive controller being integrated into the drive, as opposed to a separate controller situated at the other side of the connection cable to the drive. On an IBM PC compatible, CP/M machine, or similar, this was typically a card installed on a motherboard. The interface cards used to connect a parallel ATA drive to, for example, an ISA slot, are not drive controllers: they are merely bridges between the host bus and the ATA interface. 
Since the original ATA interface is essentially just a 16-bit ISA bus, the bridge was especially simple in case of an ATA connector being located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only to ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus. It has been referred to as "XT-IDE", "XTA" or "XT Attachment". EIDE and ATA-2 In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1 such as "Fast ATA" and "Fast ATA-2". The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants. ATA-2 also was the first to note that devices other than hard drives could be attached to the interface: ATAPI ATA was originally designed for, and worked only with, hard disk drives and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee (SFF) allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disk drives. For example, any removable media device needs a "media eject" command, and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol. ATAPI is a protocol allowing the ATA interface to carry SCSI commands and responses; therefore, all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI. ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI. ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. Some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on. The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.) 
are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview. One commonly used set is defined in the MMC SCSI command set. ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4). UDMA and ATA-4 The ATA/ATAPI-4 standard also introduced several "Ultra DMA" transfer modes. These initially supported speeds from 16 to 33 MB/s. In later versions, faster Ultra DMA modes were added, requiring new 80-wire cables to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MB/s. Ultra ATA Ultra ATA, abbreviated UATA, is a designation that has been primarily used by Western Digital for different speed enhancements to the ATA/ATAPI standards. For example, in 2000 Western Digital published a document describing "Ultra ATA/100", which brought performance improvements for the then-current ATA/ATAPI-5 standard by improving maximum speed of the Parallel ATA interface from 66 to 100 MB/s. Most of Western Digital's changes, along with others, were included in the ATA/ATAPI-6 standard (2002). x86 BIOS size limitations Initially, the size of an ATA drive was stored in the system x86 BIOS using a type number (1 through 45) that predefined the C/H/S parameters and also often the landing zone, in which the drive heads are parked while not in use. Later, a "user definable" format called C/H/S or cylinders, heads, sectors was made available. These numbers were important for the earlier ST-506 interface, but were generally meaningless for ATA—the CHS parameters for later ATA large drives often specified impossibly high numbers of heads or sectors that did not actually define the internal physical layout of the drive at all. From the start, and up to ATA-2, every user had to specify explicitly how large every attached drive was. From ATA-2 on, an "identify drive" command was implemented that can be sent and which will return all drive parameters. Owing to a lack of foresight by motherboard manufacturers, the system BIOS was often hobbled by artificial C/H/S size limitations due to the manufacturer assuming certain values would never exceed a particular numerical maximum. The first of these BIOS limits occurred when ATA drives reached sizes in excess of 504 MiB, because some motherboard BIOSes would not allow C/H/S values above 1024 cylinders, 16 heads, and 63 sectors. Multiplied by 512 bytes per sector, this totals 528,482,304 bytes which, divided by 1,048,576 bytes per MiB, equals 504 MiB (528 MB). The second of these BIOS limitations occurred at 1024 cylinders, 256 heads, and 63 sectors, and a problem in MS-DOS limited the number of heads to 255. This totals 8,422,686,720 bytes (8032.5 MiB), commonly referred to as the 8.4 gigabyte barrier. This is again a limit imposed by x86 BIOSes, and not a limit imposed by the ATA interface. It was eventually determined that these size limitations could be overridden with a small program loaded at startup from a hard drive's boot sector. Some hard drive manufacturers, such as Western Digital, started including these override utilities with large hard drives to help overcome these problems. However, if the computer was booted in some other manner without loading the special utility, the invalid BIOS settings would be used and the drive could either be inaccessible or appear to the operating system to be damaged. 
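The two capacity barriers described above follow directly from the C/H/S arithmetic. A short Python check (illustrative only) reproduces the figures quoted in the text:

```python
SECTOR_BYTES = 512  # bytes per sector

def chs_capacity(cylinders: int, heads: int, sectors: int) -> int:
    """Capacity in bytes implied by a C/H/S geometry limit."""
    return cylinders * heads * sectors * SECTOR_BYTES

# First BIOS barrier: 1024 cylinders x 16 heads x 63 sectors
b1 = chs_capacity(1024, 16, 63)
print(b1, b1 / 2**20)   # 528482304 bytes = 504.0 MiB (about 528 MB)

# Second barrier: 1024 cylinders x 255 heads (MS-DOS limit) x 63 sectors
b2 = chs_capacity(1024, 255, 63)
print(b2, b2 / 2**20)   # 8422686720 bytes = 8032.5 MiB (the "8.4 GB barrier")
```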
Later, an extension to the x86 BIOS disk services called the "Enhanced Disk Drive" (EDD) was made available, which makes it possible to address drives as large as 2^64 sectors. Interface size limitations The first drive interface used 22-bit addressing mode which resulted in a maximum drive capacity of two gigabytes. Later, the first formalized ATA specification used a 28-bit addressing mode through LBA28, allowing for the addressing of 2^28 (268,435,456) sectors (blocks) of 512 bytes each, resulting in a maximum capacity of 128 GiB (137 GB). ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 PB). As a consequence, any ATA drive of capacity larger than about 137 GB must be an ATA-6 or later drive. Connecting such a drive to a host with an ATA-5 or earlier interface will limit the usable capacity to the maximum of the interface. Some operating systems, including Windows XP pre-SP1 and Windows 2000 pre-SP3, disable LBA48 by default, requiring the user to take extra steps to use the entire capacity of an ATA drive larger than about 137 gigabytes. Older operating systems, such as Windows 98, do not support 48-bit LBA at all. However, members of the third-party group MSFN have modified the Windows 98 disk drivers to add unofficial support for 48-bit LBA to Windows 95 OSR2, Windows 98, Windows 98 SE and Windows ME. Some 16-bit and 32-bit operating systems supporting LBA48 may still not support disks larger than 2 TiB due to using 32-bit arithmetic only, a limitation that also applies to many boot sectors. Primacy and obsolescence Parallel ATA (then simply called ATA or IDE) became the primary storage device interface for PCs soon after its introduction. In some systems, a third and fourth motherboard interface were provided, allowing up to eight ATA devices to be attached to the motherboard. Often, these additional connectors were implemented by inexpensive RAID controllers. Soon after the introduction of Serial ATA (SATA) in 2003, use of Parallel ATA declined. Some PCs and laptops of the era have a SATA hard disk and an optical drive connected to PATA. As of 2007, some PC chipsets, for example the Intel ICH10, had removed support for PATA. Motherboard vendors still wishing to offer Parallel ATA with those chipsets must include an additional interface chip. In more recent computers, the Parallel ATA interface is rarely used even if present, as four or more Serial ATA connectors are usually provided on the motherboard and SATA devices of all types are common. With Western Digital's withdrawal from the PATA market, hard disk drives with the PATA interface were no longer in production after December 2013, except for specialty applications. Interface Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin female insulation displacement connectors (IDC) attached to a 40- or 80-conductor ribbon cable. Each cable has two or three connectors, one of which plugs into a host adapter interfacing with the rest of the computer system. The remaining connector(s) plug into storage devices, most commonly hard disk drives or optical drives. Each connector has 39 physical pins arranged into two rows (2.54 mm, or 1/10 in, pitch), with a gap or key at pin 20. Earlier connectors may not have that gap, with all 40 pins available. Thus, later cables with the gap filled in are incompatible with earlier connectors, although earlier cables are compatible with later connectors. 
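The addressing limits quoted under "Interface size limitations" above can be reproduced the same way; this sketch simply evaluates the 28-bit and 48-bit LBA address spaces at 512 bytes per logical sector:

```python
SECTOR_BYTES = 512  # bytes per logical sector

def lba_capacity(address_bits: int) -> int:
    """Maximum capacity in bytes addressable with the given LBA width."""
    return (2 ** address_bits) * SECTOR_BYTES

print(lba_capacity(28) / 2**30)  # 128.0 -> 128 GiB (about 137 GB) for LBA28
print(lba_capacity(48) / 2**50)  # 128.0 -> 128 PiB (about 144 PB) for LBA48
```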
Round parallel ATA cables (as opposed to ribbon cables) were eventually made available for 'case modders' for cosmetic reasons, as well as claims of improved computer cooling and were easier to handle; however, only ribbon cables are supported by the ATA specifications. Pin 20 In the ATA standard, pin 20 is defined as a mechanical key and is not used. The pin's socket on the female connector is often blocked, requiring pin 20 to be omitted from the male cable or drive connector; it is thus impossible to plug it in the wrong way round. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20. Pin 28 Pin 28 of the gray (slave/middle) connector of an 80-conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors. This enables cable select functionality. Pin 34 Pin 34 is connected to ground inside the blue connector of an 80-conductor cable but not attached to any conductor of the cable, allowing for detection of such a cable. It is attached normally on the gray and black connectors. 44-pin variant A 44-pin variant PATA connector is used for 2.5 inch drives inside laptops. The pins are closer together (2.0 mm pitch) and the connector is physically smaller than the 40-pin connector. The extra pins carry power. 80-conductor variant ATA's cables have had 40 conductors for most of its history (44 conductors for the smaller form-factor version used for 2.5" drives—the extra four for power), but an 80-conductor version appeared with the introduction of the UDMA/66 mode. All of the additional conductors in the new cable are grounds, interleaved with the signal conductors to reduce the effects of capacitive coupling between neighboring signal conductors, reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables. Though the number of conductors doubled, the number of connector pins and the pinout remain the same as 40-conductor cables, and the external appearance of the connectors is identical. Internally, the connectors are different; the connectors for the 80-conductor cable connect a larger number of ground conductors to the ground pins, while the connectors for the 40-conductor cable connect ground conductors to ground pins one-to-one. 80-conductor cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive respectively) as opposed to uniformly colored 40-conductor cable's connectors (commonly all gray). The gray connector on 80-conductor cables has pin 28 CSEL not connected, making it the slave position for drives configured cable select. Multiple devices on a cable If two devices are attached to a single cable, one must be designated as Device 0 (in the past, commonly designated master) and the other as Device 1 (in the past, commonly designated as slave). This distinction is necessary to allow both drives to share the cable without conflict. The Device 0 drive is the drive that usually appears "first" to the computer's BIOS and/or operating system. 
In most personal computers the drives are often designated as "C:" for the Device 0 and "D:" for the Device 1, referring to one active primary partition on each. The mode that a device must use is often set by a jumper setting on the device itself, which must be manually set to Device 0 (Master) or Device 1 (Slave). If there is a single device on a cable, it should be configured as Device 0. However, some drives of a certain era (Western Digital's, in particular) have a special setting called Single for this configuration. Also, depending on the hardware and software available, a Single drive on a cable will often work reliably even though configured as the Device 1 drive (most often seen where an optical drive is the only device on the secondary ATA interface). The words primary and secondary typically refer to the two IDE cables, which can have two drives each (primary master, primary slave, secondary master, secondary slave). There are many debates about how much a slow device can impact the performance of a faster device on the same cable. On early ATA host adapters, both devices' data transfers can be constrained to the speed of the slower device, if two devices of different speed capabilities are on the same cable. For all modern ATA host adapters, this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed. Even with earlier adapters without independent timing, this effect applies only to the data transfer phase of a read or write operation. This is caused by the omission of both overlapped and queued feature sets from most parallel ATA products. Only one device on a cable can perform a read or write operation at one time; therefore, a fast device on the same cable as a slow device under heavy use will find it has to wait for the slow device to complete its task first. However, most modern devices will report write operations as complete once the data is stored in their onboard cache memory, before the data is written to the (slow) magnetic storage. This allows commands to be sent to the other device on the cable, reducing the impact of the "one operation at a time" limit. The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably will not matter. Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive. Cable select A drive mode called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as Device 0 or Device 1, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the Device 0 (master) device; if it sees that pin 28 is open, the device becomes the Device 1 (slave) device. This setting is usually chosen by a jumper setting on the drive called "cable select", usually marked CS, which is separate from the Device 0/1 setting. If two drives are configured as Device 0 and Device 1 manually, this configuration does not need to correspond to their position on the cable. 
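The selection behaviour described here (and elaborated in the next paragraph, where the manual jumper setting is said to take precedence) can be summarised as a small decision function. This is only an illustration of the rules stated in the text, not firmware code; the function and parameter names are made up for the example.

```python
def device_role(jumper: str, pin28_grounded: bool) -> str:
    """Return the role one drive assumes on a parallel ATA cable.

    jumper: 'master', 'slave', or 'cable_select' (the drive's jumper position).
    pin28_grounded: True if pin 28 is grounded at this drive's connector;
    only consulted when the jumper is set to cable select.
    """
    if jumper == "master":
        return "Device 0 (master)"
    if jumper == "slave":
        return "Device 1 (slave)"
    if jumper == "cable_select":
        return "Device 0 (master)" if pin28_grounded else "Device 1 (slave)"
    raise ValueError(f"unknown jumper setting: {jumper}")

# On an 80-conductor cable the grey (middle) connector leaves pin 28 open,
# so a cable-selected drive plugged in there becomes Device 1.
print(device_role("cable_select", pin28_grounded=False))  # Device 1 (slave)
print(device_role("cable_select", pin28_grounded=True))   # Device 0 (master)
```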
Pin 28 is only used to let the drives know their position on the cable; it is not used by the host when communicating with the drives. In other words, the manual master/slave setting using jumpers on the drives takes precedence and allows them to be freely placed on either connector of the ribbon cable. With the 40-conductor cable, it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors, putting the slave Device 1 device at the end of the cable and the master Device 0 on the middle connector. This arrangement eventually was standardized in later versions. However, it had one drawback: if there is just one master device on a 2-drive cable, using the middle connector, this results in an unused stub of cable, which is undesirable for physical convenience and electrical reasons. The stub causes signal reflections, particularly at higher transfer rates. Starting with the 80-conductor cable defined for use in ATAPI5/UDMA4, the master Device 0 device goes at the far-from-the-host end of the cable on the black connector, the slave Device 1 goes on the grey middle connector, and the blue connector goes to the host (e.g. motherboard IDE connector, or IDE card). So, if there is only one (Device 0) device on a two-drive cable, using the black connector, there is no cable stub to cause reflections (the unused connector is now in the middle of the ribbon). Also, cable select is now implemented in the grey middle device connector, usually simply by omitting the pin 28 contact from the connector body. Serialized, overlapped, and queued operations The parallel ATA protocols up through ATA-3 require that once a command has been given on an ATA interface, it must complete before any subsequent command may be given. Operations on the devices must be serialized, with only one operation in progress at a time, with respect to the ATA host interface. A useful mental model is that the host ATA interface is busy with the first request for its entire duration, and therefore cannot be told about another request until the first one is complete. The function of serializing requests to the interface is usually performed by a device driver in the host operating system. The ATA-4 and subsequent versions of the specification have included an "overlapped feature set" and a "queued feature set" as optional features, both being given the name "Tagged Command Queuing" (TCQ), a reference to a set of features from SCSI which the ATA version attempts to emulate. However, support for these is extremely rare in actual parallel ATA products and device drivers because these feature sets were implemented in such a way as to maintain software compatibility with its heritage as originally an extension of the ISA bus. This implementation resulted in excessive CPU utilization which largely negated the advantages of command queuing. By contrast, overlapped and queued operations have been common in other storage buses; in particular, SCSI's version of tagged command queuing had no need to be compatible with APIs designed for ISA, allowing it to attain high performance with low overhead on buses which supported first party DMA like PCI. This has long been seen as a major advantage of SCSI. The Serial ATA standard has supported native command queueing (NCQ) since its first release, but it is an optional feature for both host adapters and target devices. 
Many obsolete PC motherboards do not support NCQ, but modern SATA hard disk drives and SATA solid-state drives usually support NCQ, which is not the case for removable (CD/DVD) drives because the ATAPI command set used to control them prohibits queued operations. HDD passwords and security ATA devices may support an optional security feature which is defined in an ATA specification, and thus not specific to any brand or device. The security feature can be enabled and disabled by sending special ATA commands to the drive. If a device is locked, it will refuse all access until it is unlocked. A device can have two passwords: A User Password and a Master Password; either or both may be set. There is a Master Password identifier feature which, if supported and used, can identify the current Master Password (without disclosing it). The master password, if set, can be used by the administrator to reset the user password if the end user has forgotten it. On some laptops and some business computers, the BIOS can control the ATA passwords. A device can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response shows which mode the disk is in: 0 = High, 1 = Maximum. In High security mode, the device can be unlocked with either the User or Master password, using the "SECURITY UNLOCK DEVICE" ATA command. There is an attempt limit, normally set to 5, after which the disk must be power cycled or hard-reset before unlocking can be attempted again. Also in High security mode, the SECURITY ERASE UNIT command can be used with either the User or Master password. In Maximum security mode, the device can be unlocked only with the User password. If the User password is not available, the only remaining way to get at least the bare hardware back to a usable state is to issue the SECURITY ERASE PREPARE command, immediately followed by SECURITY ERASE UNIT. In Maximum security mode, the SECURITY ERASE UNIT command requires the Master password and will completely erase all data on the disk. Word 89 in the IDENTIFY response indicates how long the operation will take. While the ATA lock is intended to be impossible to defeat without a valid password, there are purported workarounds to unlock a device. For NVMe drives, the security features, including lock passwords, were defined in the OPAL standard. For sanitizing entire disks, the built-in Secure Erase command is effective when implemented correctly. There have been a few reported instances of failures to erase some or all data. On some laptops and some business computers, the BIOS can utilize Secure Erase to erase all data on the disk. External parallel ATA devices Due to a short cable length specification and shielding issues, it is extremely uncommon to find external PATA devices that directly use PATA for connection to a computer. A device connected externally needs additional cable length to form a U-shaped bend so that the external device may be placed alongside, or on top of, the computer case, and the standard cable length is too short to permit this. For ease of reach from motherboard to device, the connectors tend to be positioned towards the front edge of motherboards, for connection to devices protruding from the front of the computer case. This front-edge position makes extension out the back to an external device even more difficult. Ribbon cables are poorly shielded, and the standard relies upon the cabling being installed inside a shielded computer case to meet RF emissions limits. 
External hard disk drives or optical disk drives that have an internal PATA interface use some other interface technology to bridge the distance between the external device and the computer. USB is the most common external interface, followed by FireWire. A bridge chip inside the external devices converts from the USB interface to PATA, and typically only supports a single external device without cable select or master/slave. Specifications The following table shows the names of the versions of the ATA standards and the transfer modes and rates supported by each. Note that the transfer rate for each mode (for example, 66.7 MB/s for UDMA4, commonly called "Ultra-DMA 66", defined by ATA-5) gives its maximum theoretical transfer rate on the cable. This is simply two bytes multiplied by the effective clock rate, and presumes that every clock cycle is used to transfer end-user data. In practice, of course, protocol overhead reduces this value. Congestion on the host bus to which the ATA adapter is attached may also limit the maximum burst transfer rate. For example, the maximum data transfer rate for conventional PCI bus is 133 MB/s, and this is shared among all active devices on the bus. In addition, no ATA hard drives existed in 2005 that were capable of measured sustained transfer rates of above 80 MB/s. Furthermore, sustained transfer rate tests do not give realistic throughput expectations for most workloads: they use I/O loads specifically designed to encounter almost no delays from seek time or rotational latency. Hard drive performance under most workloads is limited first and second by those two factors; the transfer rate on the bus is a distant third in importance. Therefore, transfer speed limits above 66 MB/s really affect performance only when the hard drive can satisfy all I/O requests by reading from its internal cache—a very unusual situation, especially considering that such data is usually already buffered by the operating system. More recently, mechanical hard disk drives have become able to transfer data at up to 524 MB/s, which is far beyond the capabilities of the PATA/133 specification. High-performance solid state drives can transfer data at up to 7000–7500 MB/s. Only the Ultra DMA modes use CRC to detect errors in data transfer between the controller and drive. This is a 16-bit CRC, and it is used for data blocks only. Transmission of command and status blocks does not use the fast signaling methods that would necessitate CRC. For comparison, in Serial ATA, 32-bit CRC is used for both commands and data. Features introduced with each ATA revision Speed of defined transfer modes Related standards, features, and proposals ATAPI Removable Media Device (ARMD) ATAPI devices with removable media, other than CD and DVD drives, are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system. These can be supported as bootable devices by a BIOS complying with the ATAPI Removable Media Device BIOS Specification, originally developed by Compaq Computer Corporation and Phoenix Technologies. It specifies provisions in the BIOS of a personal computer to allow the computer to be bootstrapped from devices such as Zip drives, Jaz drives, SuperDisk (LS-120) drives, and similar devices. These devices have removable media like floppy disk drives, but capacities more commensurate with hard drives, and programming requirements unlike either. 
Due to limitations in the floppy controller interface most of these devices were ATAPI devices, connected to one of the host computer's ATA interfaces, similarly to a hard drive or CD-ROM device. However, existing BIOS standards did not support these devices. An ARMD-compliant BIOS allows these devices to be booted from and used under the operating system without requiring device-specific code in the OS. A BIOS implementing ARMD allows the user to include ARMD devices in the boot search order. Usually an ARMD device is configured earlier in the boot order than the hard drive. Similarly to a floppy drive, if bootable media is present in the ARMD drive, the BIOS will boot from it; if not, the BIOS will continue in the search order, usually with the hard drive last. There are two variants of ARMD, ARMD-FDD and ARMD-HDD. Originally ARMD caused the devices to appear as a sort of very large floppy drive, either the primary floppy drive device 00h or the secondary device 01h. Some operating systems required code changes to support floppy disks with capacities far larger than any standard floppy disk drive. Also, standard-floppy disk drive emulation proved to be unsuitable for certain high-capacity floppy disk drives such as Iomega Zip drives. Later the ARMD-HDD, ARMD-"Hard disk device", variant was developed to address these issues. Under ARMD-HDD, an ARMD device appears to the BIOS and the operating system as a hard drive. ATA over Ethernet In August 2004, Sam Hopkins and Brantley Coile of Coraid specified a lightweight ATA over Ethernet protocol to carry ATA commands over Ethernet instead of directly connecting them to a PATA host adapter. This permitted the established block protocol to be reused in storage area network (SAN) applications. Compact Flash Compact Flash in its IDE mode is essentially a miniaturized ATA interface, intended for use on devices that use flash memory storage. No interfacing chips or circuitry are required, other than to directly adapt the smaller CF socket onto the larger ATA connector. (Although most CF cards only support IDE mode up to PIO4, making them much slower in IDE mode than their CF capable speed) The ATA connector specification does not include pins for supplying power to a CF device, so power is inserted into the connector from a separate source. The exception to this is when the CF device is connected to a 44-pin ATA bus designed for 2.5-inch hard disk drives, commonly found in notebook computers, as this bus implementation must provide power to a standard hard disk drive. CF devices can be designated as devices 0 or 1 on an ATA interface, though since most CF devices offer only a single socket, it is not necessary to offer this selection to end users. Although CF can be hot-pluggable with additional design methods, by default when wired directly to an ATA interface, it is not intended to be hot-pluggable. See also References External links CE-ATA Workgroup AT Attachment Computer storage buses Computer connectors Computer hardware standards
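As noted under "Specifications" above, each Ultra DMA mode's headline rate is simply two bytes per transfer multiplied by the effective clock rate. The figures below are the commonly cited per-mode maxima, and the derivation is illustrative only (the dictionary and variable names are made up for the example):

```python
# Commonly cited maximum burst rates for the Ultra DMA modes, in MB/s.
UDMA_RATES = {0: 16.7, 1: 25.0, 2: 33.3, 3: 44.4, 4: 66.7, 5: 100.0, 6: 133.3}

BYTES_PER_TRANSFER = 2  # the parallel ATA data bus is 16 bits wide

for mode, rate in UDMA_RATES.items():
    clock_mhz = rate / BYTES_PER_TRANSFER  # effective transfer clock in MHz
    print(f"UDMA{mode}: {rate:6.1f} MB/s -> {clock_mhz:5.2f} MHz effective clock")
# e.g. UDMA4's 66.7 MB/s corresponds to a 33.3 MHz effective transfer clock.
```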
Parallel ATA
[ "Technology" ]
7,217
[ "Computer standards", "Computer hardware standards" ]
2,787
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often carried out through remote sensing and in situ missions. Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications. Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research. Overview The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία (-logia), "study". A close synonym is exobiology, from the Greek Έξω, "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth. 
Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin. While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question of its existence is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory. The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s and 1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive. In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field. The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars, NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water, and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars. Theoretical foundations Planetary habitability Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability. Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe, and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. 
As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds. Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry. Environmental stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of a stable temperature, pressure, and radiation levels; resultantly, astrobiological research focuses on planets orbiting Sun-like red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common. (See also: Habitability of red dwarf systems). Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. Previously, it was assumed that this would necessarily be from a sun-like star, however with developments within extremophile research contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy. It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change. Methods Studying terrestrial extremophiles Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methods within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to: Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, and some are able to survive in high temperatures and pressures, and use chemical energy instead of sunlight to produce food. 
Desert extremophiles: Researchers are studying organisms that can survive in extreme dry, high temperature conditions, such as in deserts. Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments. Researching Earth's present environment Research also regards the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including: Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances. Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans. Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth. Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth. Finding biosignatures on other worlds Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilise methodologies within planetary sciences. These include: The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life. The study of liquid bodies on icy moons: Discoveries of surface and subsurface bodies of liquid on moons such as Europa, Titan and Enceladus showed possible habitability zones, making them viable targets for the search for extraterrestrial life. , missions like Europa Clipper and Dragonfly are planned to search for biosignatures within these environments. The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus. Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets. 
Talking to extraterrestrials SETI and CETI: Scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes within the discipline of communication with extraterrestrial intelligence (CETI). CETI focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens may raid Earth for its resources. Investigating the early Earth Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include: The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules. The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications, since subjects of current astrobiological research such as Mars lack such a field. The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth that led to the formation of the building blocks of life (amino acids, nucleotides, and lipids) and how these molecules could have formed spontaneously under early Earth conditions. The study of impact events: Scientists are investigating the potential role of impact events, especially meteorites, in the delivery of water and organic molecules to early Earth. The study of the primordial soup: Researchers are investigating the conditions and ingredients that were present on the early Earth, such as water and organic molecules, and how these ingredients could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions. The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth. The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules and thus in the emergence of life. The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules. The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence. 
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth. The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms in the early Earth, and how these organisms may have played a role in the emergence of life. The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms. Research The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at the present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells. Research outcomes To date, no evidence of extraterrestrial life has been identified. The Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial. Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists. On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone." Elements of astrobiology Astronomy Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway. The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. 
It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life. An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise: N = R* × fp × ne × fl × fi × fc × L, where: N = The number of communicative civilizations R* = The rate of formation of suitable stars (stars such as the Sun) fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun) ne = The number of Earth-sized worlds per planetary system fl = The fraction of those Earth-sized planets where life actually develops fi = The fraction of life sites where intelligence develops fc = The fraction of communicative planets (those on which electromagnetic communications technology develops) L = The "lifetime" of communicating civilizations However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, the rate of formation of suitable stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets, and ne, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it. Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth. Biology Biology cannot state that a process or phenomenon, by being mathematically possible, must necessarily exist on an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important for understanding four areas in the limits of life in a planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life. Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. 
The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they form an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy from sunlight are green sulfur bacteria which are capturing geothermal light for anoxygenic photosynthesis or bacteria running chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist. Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways, is considered a crucial component to understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Rusavskia elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animal Tardigrade. While tardigrades are not considered true extremophiles, they are considered extremotolerant microorganisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere. Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist. The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. 
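As a brief aside to the chemosynthesis discussion above, the energy source of the sulfide-oxidising vent bacteria is often summarised by a simplified net reaction of roughly the following textbook form (a schematic scheme, not the metabolism of any particular species):

```latex
% Simplified net chemosynthesis by sulfide-oxidising bacteria (textbook form):
% carbon dioxide is fixed into carbohydrate using energy from hydrogen sulfide oxidation.
\[
\mathrm{CO_2 + 4\,H_2S + O_2 \;\longrightarrow\; CH_2O + 4\,S + 3\,H_2O}
\]
```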
The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth. The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets. Philosophy David Grinspoon called astrobiology a field of natural philosophy. Astrobiology intersects with philosophy by raising questions about the nature and existence of life beyond Earth. Philosophical implications include the definition of life itself, issues in the philosophy of mind and cognitive science in case intelligent life is found, epistemological questions about the nature of proof, ethical considerations of space exploration, along with the broader impact of discovering extraterrestrial life on human thought and society. Dunér has emphasized philosophy of astrobiology as an ongoing existential exercise in individual and collective self-understanding, whose major task is constructing and debating concepts such as the concept of life. Key issues, for Dunér, are questions of resource money and monetary planning, epistemological questions regarding astrobiological knowledge, linguistics issues about interstellar communication, cognitive issues such as the definition of intelligence, along with the possibility of interplanetary contamination. Persson also emphasized key philosophical questions in astrobiology. They include ethical justification of resources, the question of life in general, the epistemological issues and knowledge about being alone in the universe, ethics towards extraterrestrial life, the question of politics and governing uninhabited worlds, along with questions of ecology. For von Hegner, the question of astrobiology and the possibility of astrophilosophy differs. For him, the discipline needs to bifurcate into astrobiology and astrophilosophy since discussions made possible by astrobiology, but which have been astrophilosophical in nature, have existed as long as there have been discussions about extraterrestrial life. Astrobiology is a self-corrective interaction among observation, hypothesis, experiment, and theory, pertaining to the exploration of all natural phenomena. 
Astrophilosophy consists of methods of dialectic analysis and logical argumentation, pertaining to the clarification of the nature of reality. Šekrst argues that astrobiology requires the affirmation of astrophilosophy, but not as a separate cognate to astrobiology. The stance of conceptual speciesm, according to Šekrst, permeates astrobiology since the very name astrobiology tries to talk about not just biology, but about life in a general way, which includes terrestrial life as a subset. This leads us to either redefine philosophy, or consider the need for astrophilosophy as a more general discipline, to which philosophy is just a subset that deals with questions such as the nature of the human mind and other anthropocentric questions. Most of the philosophy of astrobiology deals with two main questions: the question of life and the ethics of space exploration. Kolb specifically emphasizes the question of viruses, for which the question whether they are alive or not is based on the definitions of life that include self-replication. Schneider tried to defined exo-life, but concluded that we often start with our own prejudices and that defining extraterrestrial life seems futile using human concepts. For Dick, astrobiology relies on metaphysical assumption that there is extraterrestrial life, which reaffirms questions in the philosophy of cosmology, such as fine-tuning or the anthropic principle. Rare Earth hypothesis The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and more, multi-cellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.); and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds. Missions Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System. Viking program The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists. Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. 
Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Beagle 2 Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna. EXPOSE EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit. Mars Science Laboratory The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars. Tanpopo The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in the low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganism can survive for at least one year in space. This may support the idea that clumps greater than 0.5 millimeters of microorganisms could be one way for life to spread from planet to planet. ExoMars rover ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission was under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it was planned for a 2022 launch; however, technical and funding issues and the Russian invasion of Ukraine have forced ESA to repeatedly delay the rover's delivery to 2028. Mars 2020 Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel. 
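The carbon dioxide utilisation mentioned above can be summarised by the idealised net reaction behind solid-oxide electrolysis demonstrations of this kind (NASA's MOXIE experiment is a representative example, though it is not named in the text above):

```latex
% Idealised net reaction for solid-oxide electrolysis of Martian CO2 into oxygen:
\[
\mathrm{2\,CO_2 \;\longrightarrow\; 2\,CO + O_2}
\]
% The carbon monoxide by-product is vented; the oxygen could serve as an
% oxidiser for rocket propellant or as breathable gas for a crew.
```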
Europa Clipper Europa Clipper is a mission launched by NASA on 14 October 2024 that will conduct detailed reconnaissance of Jupiter's moon Europa beginning in 2030, and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites. Dragonfly Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts. Proposed concepts Icebreaker Life Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation. Journey to Enceladus and Titan Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter. Enceladus Life Finder Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon. Life Investigation For Enceladus Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan. Oceanus Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to the moon of Saturn, Titan, to assess its habitability. Oceanus objectives are to reveal Titan's organic chemistry, geology, gravity, topography, collect 3D reconnaissance data, catalog the organics and determine where they may interact with liquid water. Explorer of Enceladus and Titan Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency. See also The Living Cosmos Citations General references The International Journal of Astrobiology , published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field. Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origins of life, evolution, distribution, and destiny in the universe. Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt. Further reading D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition). Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology. 
External links Astrobiology.nasa.gov UK Centre for Astrobiology Spanish Centro de Astrobiología Astrobiology Research at The Library of Congress Astrobiology Survey – An introductory course on astrobiology Summary - Search For Life Beyond Earth (NASA; 25 June 2021) Origin of life Astronomical sub-disciplines Branches of biology Speculative evolution
Astrobiology
[ "Astronomy", "Biology" ]
8,184
[ "Origin of life", "Hypothetical life forms", "Speculative evolution", "Astrobiology", "nan", "Biological hypotheses", "Astronomical sub-disciplines" ]
2,792
https://en.wikipedia.org/wiki/Anthropic%20principle
The anthropic principle, also known as the observation selection effect, is the proposition that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life. There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. Definition and basis The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility. The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved. The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. Critics of the weak anthropic principle point out that its lack of falsifiability entails that it is non-scientific and therefore inherently not useful. Stronger variants of the anthropic principle which are not tautologies can still make claims considered controversial by some; these would be contingent upon empirical verification. Anthropic observations In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe was one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium) especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. 
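Dicke's "golden age" argument is essentially an order-of-magnitude comparison of timescales. The sketch below restates it numerically; the figures and cut-offs are rough, commonly quoted values chosen only for illustration and are not taken from Dicke's papers.

```python
# Rough numerical restatement of Dicke's timescale argument.
# All values are approximate, commonly quoted figures used only for illustration.

UNIVERSE_AGE_GYR   = 13.8   # present age of the universe
SUNLIKE_LIFE_GYR   = 10.0   # main-sequence lifetime of a Sun-like star
METALS_BUILDUP_GYR = 1.0    # very rough time for early stellar generations to
                            # enrich the interstellar medium with carbon etc.

def observers_plausible(age_gyr):
    """Crude check: old enough for metals plus a star/planet/evolution history,
    but young enough that Sun-like stars are still shining."""
    old_enough   = age_gyr > METALS_BUILDUP_GYR + 4.6   # illustrative minimum history
    young_enough = age_gyr < 10 * SUNLIKE_LIFE_GYR      # illustrative "stars mostly dead" cut-off
    return old_enough and young_enough

for age in (UNIVERSE_AGE_GYR / 10, UNIVERSE_AGE_GYR, UNIVERSE_AGE_GYR * 10):
    print(f"age = {age:5.1f} Gyr -> observers plausible? {observers_plausible(age)}")
    # too young / "golden age" / too old, mirroring the argument in the text
```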
Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory. Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life. The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life. Origin The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang). Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. Roger Penrose explained the weak form as follows: One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? 
Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?" Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section. Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem." Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann, it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be. Variants Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space. Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est."The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism. 
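Carter's weak principle, read as an observation selection effect, can be made concrete with a small simulation: draw many hypothetical universes with a random value of some parameter, keep only those that permit observers, and compare the ensemble distribution with the distribution seen "from the inside". The habitable window and all numbers below are invented purely for illustration and carry no physical meaning.

```python
import random

# Toy observation-selection effect: a "constant" X is drawn uniformly for each
# universe, but observers only arise when X falls in a narrow window.

random.seed(0)
N_UNIVERSES = 100_000
HABITABLE = (0.48, 0.52)           # hypothetical life-permitting window for X

draws = [random.uniform(0.0, 1.0) for _ in range(N_UNIVERSES)]
observed = [x for x in draws if HABITABLE[0] <= x <= HABITABLE[1]]

print(f"fraction of universes with observers: {len(observed)/len(draws):.3%}")
print(f"mean X over all universes:      {sum(draws)/len(draws):.3f}")
print(f"mean X over observed universes: {sum(observed)/len(observed):.3f}")
# Observers necessarily find X inside the narrow window, however rare such
# universes are in the full ensemble; no fine-tuner is needed to explain why
# the value they measure looks "special" to them.
```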
In their 1986 book, The anthropic cosmological principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows: Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so."Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP. Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history."This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler: "There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'." This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves. "Observers are necessary to bring the Universe into being." Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner. "An ensemble of other different universes is necessary for the existence of our Universe." By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation. The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes: Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice. According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. 
It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary. Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe": Character of anthropic reasoning Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions. The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle." The modern form of a design argument is put forth by intelligent design. 
Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design. Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate: The absurd universe: Our universe just happens to be the way it is. The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded. The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist. Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence. The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind. The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP). The fake universe: Humans live inside a virtual reality simulation. Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005). Clearly each of these hypotheses resolve some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994). The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links. Observational evidence No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist. 
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following: Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants; Various theories for generating multiple universes will prove robust; Evidence that the universe is fine tuned will continue to accumulate; No life with a non-carbon chemistry will be discovered; Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe. Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life. Probabilistic predictions of parameter values can be made given: a particular multiverse with a "measure", i.e. a well defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X lies in the range X0 < X < X0 + dX), and an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe). The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense. One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers. Applications of the principle The nucleosynthesis of carbon-12 Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction. However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance. Cosmic inflation Don Page criticized the entire theory of cosmic inflation as follows. 
He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require. String theory String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed. Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Luboš Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe. Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life. Dimensions of spacetime There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue. 
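One classic strand of this dimensionality argument, made quantitative by Ehrenfest and discussed in the next paragraph, concerns orbital stability. A compact textbook-style version of the calculation is sketched below; it is a schematic outline under the stated assumptions, not a reproduction of Ehrenfest's 1920 analysis.

```latex
% Schematic stability argument for circular orbits in N spatial dimensions.
% Assumes an attractive central force with a Gauss-law fall-off.
\[
F(r) \;=\; -\frac{k}{r^{\,N-1}}, \qquad
V_{\mathrm{eff}}(r) \;=\; \frac{L^{2}}{2 m r^{2}} + V(r), \qquad
V'(r) \;=\; -F(r).
\]
% A circular orbit at r_0 satisfies V_eff'(r_0) = 0 and is stable only if
% V_eff''(r_0) > 0.  Carrying out the derivatives gives
\[
V_{\mathrm{eff}}''(r_0) \;=\; \frac{k}{r_0^{\,N}}\,\bigl(4 - N\bigr) \;>\; 0
\quad\Longleftrightarrow\quad N < 4 ,
\]
% so bound, stable planetary orbits of the familiar kind require at most
% three spatial dimensions, in line with Ehrenfest's result quoted below.
```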
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204). In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse. Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if N > 3, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us. On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10^21 solar masses, due to the small positivity of the cosmological constant observed. In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks. Metaphysical interpretations Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ centered (compare Omega Point), expressing a creatio evolutiva instead of the elder notion of creatio continua. From a strictly secular, humanist perspective, it also allows human beings to be put back in the center, an anthropogenic shift in cosmology. Karl W. 
Giberson has laconically stated that William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point. The anthropic cosmological principle A thorough extant study of the anthropic principle is the book The anthropic cosmological principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way. The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks. Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out. Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas. In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP): Reception and controversies Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects. A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts." 
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another. Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result. Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa. Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc. Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe. The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. 
In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours: See also (discussing the anthropic principle) (an immediate precursor of the idea) (work of Alejandro Jenkins) Notes Footnotes References 5 chapters available online. Stenger, Victor J. (1999), "Anthropic design", The skeptical inquirer 23 (August 31, 1999): 40–43 Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Háyek, L. Valdés and D. Westerstahl (ed.), Logic, methodology and philosophy of science, Proceedings of the 12th international congress of the LMPS. London: King's college publications, pp. 441–473. . A simple anthropic argument for why there are 3 spatial and 1 temporal dimensions. Shows that some of the common criticisms of anthropic principle based on its relationship with numerology or the theological design argument are wrong. External links Nick Bostrom: web site devoted to the anthropic principle. Friederich, Simon. Fine-tuning, review article of the discussion about fine-tuning, highlighting the role of the anthropic principles. Gijsbers, Victor. (2000). Theistic anthropic principle refuted – Positive atheism magazine. Chown, Marcus, Anything Goes, New scientist, 6 June 1998. On Max Tegmark's work. Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning Kavli-CERCA conference video archive. Sober, Elliott R. 2009, "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads." Philosophical Studies, 2009, 143: 63–90. "Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection. Leonard Susskind and Lee Smolin debate the anthropic principle. Debate among scientists on arxiv.org. Evolutionary probability and fine tuning Benevolent design and the anthropic principle at MathPages Critical review of "The privileged planet" The anthropic principle – a review. Berger, Daniel, 2002, "An impertinent résumé of the Anthropic cosmological principle. " A critique of Barrow & Tipler. Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power. Paul Davies: Cosmic jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007. Astronomical hypotheses Concepts in epistemology Physical cosmology Principles Religion and science
Anthropic principle
[ "Physics", "Astronomy" ]
8,402
[ "Astronomical hypotheses", "Astronomical sub-disciplines", "Philosophy of astronomy", "Theoretical physics", "Astrophysics", "Astronomical controversies", "Anthropic principle", "Physical cosmology" ]
2,815
https://en.wikipedia.org/wiki/Advanced%20Mobile%20Phone%20System
Advanced Mobile Phone System (AMPS) was an analog mobile phone system standard originally developed by Bell Labs and later modified in a cooperative effort between Bell Labs and Motorola. It was officially introduced in the Americas on October 13, 1983, and was deployed in many other countries too, including Israel in 1986, Australia in 1987, Singapore in 1988, and Pakistan in 1990. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. As of February 18, 2008, carriers in the United States were no longer required to support AMPS and companies such as AT&T and Verizon Communications have discontinued this service permanently. AMPS was discontinued in Australia in September 2000, in India by October 2004, in Israel by January 2010, and Brazil by 2010. History The first cellular network efforts began at Bell Labs and with research conducted at Motorola. In 1960, John F. Mitchell became Motorola's chief engineer for its mobile-communication products, and oversaw the development and marketing of the first pager to use transistors. Motorola had long produced mobile telephones for automobiles, but these large and heavy models consumed too much power to allow their use without the automobile's engine running. Mitchell's team, which included Dr. Martin Cooper, developed portable cellular telephony. Cooper and Mitchell were among the Motorola employees granted a patent for this work in 1973. The first call on the prototype connected, reportedly, to a wrong number. While Motorola was developing a cellular phone, from 1968 to 1983 Bell Labs worked out a system called Advanced Mobile Phone System (AMPS), which became the first cellular network standard in the United States. The Bell System deployed AMPS in Chicago, Illinois, first as an equipment test serving approximately 100 units in 1978, and subsequently as a service test planned for 2,000 billed units. Motorola and others designed and built the cellular phones for this and other cellular systems. Louis M. Weinberg, a marketing director at AT&T, was named the first president of the AMPS corporation. He served in this position during the startup of the AMPS subsidiary of AT&T. Martin Cooper, a former general manager for the systems division at Motorola, led a team that produced the first cellular handset in 1973 and made the first phone call from it. In 1983 Motorola introduced the DynaTAC 8000x, the first commercially available cellular phone small enough to be easily carried. He later introduced the so-called Bag Phone. In 1992, the first smartphone, called IBM Simon, used AMPS. Frank Canova led its design at IBM and it was demonstrated that year at the COMDEX computer-industry trade-show. A refined version of the product was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. The Simon was the first device that could be properly referred to as a "smartphone", even though that term was not yet coined. Technology AMPS is a first-generation cellular technology that uses separate frequencies, or "channels", for each conversation. It therefore required considerable bandwidth for a large number of users. In general terms, AMPS was very similar to the older "0G" Improved Mobile Telephone Service it replaced, but used considerably more computing power to select frequencies, hand off conversations to land lines, and handle billing and call setup. What really separated AMPS from older systems is the "back end" call setup functionality. 
In AMPS, the cell centers could flexibly assign channels to handsets based on signal strength, allowing the same frequency to be re-used, without interference, if locations were separated enough. The channels were grouped so that the set used in a given cell was different from those used in the neighboring cells. This allowed a larger number of phones to be supported over a geographical area. AMPS pioneers coined the term "cellular" because of its use of small hexagonal "cells" within a system. AMPS suffered from many weaknesses compared to today's digital technologies. As an analog standard, it was susceptible to static and noise, and there was no protection from 'eavesdropping' using a scanner or an older TV set that could tune into channels 70–83. Cloning In the 1990s, an epidemic of "cloning" cost the cellular carriers millions of dollars. An eavesdropper with specialized equipment could intercept a handset's ESN (Electronic Serial Number) and MDN or CTN (Mobile Directory Number or Cellular Telephone Number). The Electronic Serial Number, a 12-digit number sent by the handset to the cellular system for billing purposes, uniquely identified that phone on the network. The system then allowed or disallowed calls and/or features based on its customer file. A person intercepting an ESN/MDN pair could clone the combination onto a different phone and use it in other areas for making calls without paying. Cellular phone cloning became possible with off-the-shelf technology in the 1990s. Would-be cloners required three key items: A radio receiver, such as the Icom PCR-1000, that could tune into the Reverse Channel (the frequency on which AMPS phones transmit data to the tower) A PC with a sound card and a software program called Banpaia A phone that could easily be used for cloning, such as the Oki 900 The radio, when tuned to the proper frequency, would receive the signal transmitted by the cell phone to be cloned, containing the phone's ESN/MDN pair. This signal would feed into the sound-card audio-input of the PC, and Banpaia would decode the ESN/MDN pair from this signal and display it on the screen. The hacker could then copy that data into the Oki 900 phone and reboot it, after which the phone network could not distinguish the Oki from the original phone whose signal had been received. This gave the cloner, through the Oki phone, the ability to use the mobile-phone service of the legitimate subscriber whose phone was cloned – just as if that phone had been physically stolen, except that the subscriber retained his or her phone, unaware that the phone had been cloned – at least until that subscriber received his or her next bill. The problem became so large that some carriers required the use of a PIN before making calls. Eventually, the cellular companies initiated a system called RF Fingerprinting, whereby it could determine subtle differences in the signal of one phone from another and shut down some cloned phones. Some legitimate customers had problems with this though if they made certain changes to their own phone, such as replacing the battery and/or antenna. The Oki 900 could listen in to AMPS phone-calls right out-of-the-box with no hardware modifications. Standards AMPS was originally standardized by the American National Standards Institute (ANSI) as EIA/TIA/IS-3. EIA/TIA/IS-3 was superseded by EIA/TIA-553 and a TIA interim standard. With digital technologies, the cost of wireless service is so low that the problem of cloning has virtually disappeared. 
Frequency bands AMPS cellular service operated in the 850 MHz Cellular band. For each market area, the United States Federal Communications Commission (FCC) allowed two licensees (networks) known as "A" and "B" carriers. Each carrier within a market used a specified "block" of frequencies consisting of 21 control channels and 395 voice channels. Originally, the B (wireline) side license was usually owned by the local phone company, and the A (non-wireline) license was given to wireless telephone providers. At the inception of cellular in 1983, the FCC had granted each carrier within a market 333 channel pairs (666 channels total). By the late 1980s, the cellular industry's subscriber base had grown into the millions across America and it became necessary to add channels for additional capacity. In 1989, the FCC granted carriers an expansion from the previous 666 channels to the final 832 (416 pairs per carrier). The additional frequencies were from the band held in reserve for future (inevitable) expansion. These frequencies were immediately adjacent to the existing cellular band. These bands had previously been allocated to UHF TV channels 70–83. Each duplex channel was composed of 2 frequencies. 416 of these were in the 824–849 MHz range for transmissions from mobile stations to the base stations, paired with 416 frequencies in the 869–894 MHz range for transmissions from base stations to the mobile stations. Each cell site used a different subset of these channels than its neighbors to avoid interference. This significantly reduced the number of channels available at each site in real-world systems. Each AMPS channel had a one way bandwidth of 30 kHz, for a total of 60 kHz for each duplex channel. Laws were passed in the US which prohibited the FCC type acceptance and sale of any receiver which could tune the frequency ranges occupied by analog AMPS cellular services. Though the service is no longer offered, these laws remain in force (although they may no longer be enforced). Narrowband AMPS In 1991, Motorola proposed an AMPS enhancement known as narrowband AMPS (NAMPS or N-AMPS). Digital AMPS Later, many AMPS networks were partially converted to D-AMPS, often referred to as TDMA (though TDMA is a generic term that applies to many 2G cellular systems). D-AMPS, commercially deployed since 1993, was a digital, 2G standard used mainly by AT&T Mobility and U.S. Cellular in the United States, Rogers Wireless in Canada, Telcel in Mexico, Telecom Italia Mobile (TIM) in Brazil, VimpelCom in Russia, Movilnet in Venezuela, and Cellcom in Israel. In most areas, D-AMPS is no longer offered and has been replaced by more advanced digital wireless networks. Successor technologies AMPS and D-AMPS have now been phased out in favor of either CDMA2000 or GSM, which allow for higher capacity data transfers for services such as WAP, Multimedia Messaging System (MMS), and wireless Internet access. There are some phones capable of supporting AMPS, D-AMPS and GSM all in one phone (using the GAIT standard). Analog AMPS being replaced by digital In 2002, the FCC decided to no longer require A and B carriers to support AMPS service as of February 18, 2008. All AMPS carriers have converted to a digital standard such as CDMA2000 or GSM. Digital technologies such as GSM and CDMA2000 support multiple voice calls on the same channel and offer enhanced features such as two-way text messaging and data services. 
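As a rough, back-of-the-envelope sketch of the Frequency bands arithmetic above (the band edges and the 30 kHz one-way channel width are taken from the text; the gap between the computed 833 raw channels and the 832 actually allocated is assumed to come from guard or signalling allocations not detailed here):

```python
# Back-of-the-envelope check of the AMPS channel arithmetic described above.
# Figures from the text: mobile ("reverse") band 824-849 MHz, base ("forward")
# band 869-894 MHz, 30 kHz of one-way bandwidth per channel.
mobile_band_hz = (824.0e6, 849.0e6)
base_band_hz = (869.0e6, 894.0e6)
channel_width_hz = 30e3

one_way_channels = (mobile_band_hz[1] - mobile_band_hz[0]) / channel_width_hz
duplex_offset_mhz = (base_band_hz[0] - mobile_band_hz[0]) / 1e6

print(f"one-way channels in a 25 MHz block: {one_way_channels:.0f}")  # ~833; 832 were used in service
print(f"duplex offset between paired frequencies: {duplex_offset_mhz:.0f} MHz")  # 45 MHz
print(f"total bandwidth per duplex channel: {2 * channel_width_hz / 1e3:.0f} kHz")  # 60 kHz
```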
Unlike in the United States, the Canadian Radio-television and Telecommunications Commission (CRTC) and Industry Canada have not set any requirement for maintaining AMPS service in Canada. Rogers Wireless has dismantled their AMPS (along with IS-136) network; the networks were shut down May 31, 2007. Bell Mobility and Telus Mobility, who operated AMPS networks in Canada, announced that they would observe the same timetable as outlined by the FCC in the United States, and as a result would not begin to dismantle their AMPS networks until after February 2008. OnStar relied heavily on North American AMPS service for its subscribers because, when the system was developed, AMPS offered the most comprehensive wireless coverage in the US. In 2006, ADT asked the FCC to extend the AMPS deadline due to many of their alarm systems still using analog technology to communicate with the control centers. Cellular companies who own an A or B license (such as Verizon and Alltel) were required to provide analog service until February 18, 2008. After that point, however, most cellular companies were eager to shut down AMPS and use the remaining channels for digital services. OnStar transitioned to digital service with the help of data transport technology developed by Airbiquity, but warned customers who could not be upgraded to digital service that their service would permanently expire on January 1, 2008. Commercial deployments of AMPS by country See also History of mobile phones Citations References Interview of Joel Engel First generation mobile telecommunications History of mobile phones Mobile radio telephone systems Telecommunications-related introductions in 1983
Advanced Mobile Phone System
[ "Technology" ]
2,519
[ "Mobile telecommunications", "Mobile radio telephone systems", "First generation mobile telecommunications" ]
2,819
https://en.wikipedia.org/wiki/Aerodynamics
Aerodynamics (from Greek aero (air) + dynamics) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature. History Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W. 
Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach who was one of the first to investigate the properties of the supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations. Fundamental concepts Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. 
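Two of the relations referred to above, the mass-continuity equation for incompressible flow and Bernoulli's equation, can be written in their standard textbook forms; these are well-known results rather than formulas quoted from this article:

```latex
% Mass continuity for an incompressible flow (constant density):
\nabla \cdot \mathbf{v} = 0

% Bernoulli's equation for steady, inviscid, incompressible flow along a streamline,
% relating pressure p, density rho, flow speed v and height h:
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant}
```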
Density, flow velocity, and an additional property, viscosity, are used to classify flow fields. Flow classification Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results. Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine). Continuum assumption Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules between themselves and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow. The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics. Conservation laws The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. 
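A minimal sketch of the classifications just described, assuming the conventional cutoffs (a transonic band of roughly Mach 0.8 to 1.2, hypersonic above about Mach 5, and a Knudsen number much less than 1 for the continuum assumption); the example figures in the comments are illustrative rather than taken from the article:

```python
def flow_regime(mach: float) -> str:
    """Classify a flow by Mach number, using the conventional cutoffs
    described above (transonic band roughly 0.8-1.2, hypersonic above ~5)."""
    if mach < 0.8:
        return "subsonic"
    if mach <= 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"


def continuum_ok(mean_free_path_m: float, length_scale_m: float,
                 knudsen_limit: float = 0.01) -> bool:
    """The continuum assumption is reasonable when the Knudsen number
    (mean free path / characteristic length) is much less than 1."""
    return (mean_free_path_m / length_scale_m) < knudsen_limit


# Airliner near sea level: mean free path of order tens of nanometres, ~40 m body
print(flow_regime(0.85), continuum_ok(68e-9, 40.0))   # transonic True
# Very-high-altitude vehicle: long mean free path, ~1 m body
print(flow_regime(20.0), continuum_ok(0.1, 1.0))      # hypersonic False
```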
Three conservation principles are used: Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation. Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components). Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest. Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high-speed computers were not historically available, and because solving these complex equations remains computationally costly now that such methods are available, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations. The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables. Branches of aerodynamics Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe. Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic. The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. 
The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows. Incompressible aerodynamics An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the effect of the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be assumed, otherwise the effects of compressibility must be included. Subsonic flow Subsonic (or low-speed) aerodynamics describes fluid motion in flows which are much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions. In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics. Compressible aerodynamics According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows. Transonic flow The term Transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic. Supersonic flow Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem. 
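The rule of thumb above, that density changes stay below about 5% up to Mach 0.3, can be checked with the isentropic stagnation-density relation for a perfect gas; the relation itself is standard compressible-flow theory and is an assumption here, not a formula given in the article:

```python
# Stagnation-to-static density ratio for isentropic flow of a perfect gas:
#   rho0 / rho = (1 + (gamma - 1)/2 * M**2) ** (1 / (gamma - 1))
# This gives the largest density change in the flow (at the stagnation point).
gamma = 1.4  # ratio of specific heats for air

def max_density_change(mach: float) -> float:
    ratio = (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (1.0 / (gamma - 1.0))
    return ratio - 1.0  # fractional change relative to free-stream density

for m in (0.1, 0.3, 0.5):
    print(f"M = {m}: about {max_density_change(m) * 100:.1f}% density change")
# M = 0.3 gives roughly 4.6%, consistent with the "less than 5%" rule of thumb above.
```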
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes. Hypersonic flow In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. Associated terminology The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence. Boundary layers The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically. Turbulence In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow. Aerodynamics in other fields Engineering design Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines. The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine. Environmental design Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. 
The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems. Aerodynamic equations are used in numerical weather prediction. Ball-control in sports Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect". See also Aeronautics Aerostatics Aviation Insect flight – how bugs fly List of aerospace engineering topics List of engineering topics Nose cone design Fluid dynamics Computational fluid dynamics References Further reading General aerodynamics Subsonic aerodynamics Obert, Ed (2009). . Delft; About practical aerodynamics in industry and the effects on design of aircraft. . Transonic aerodynamics Supersonic aerodynamics Hypersonic aerodynamics History of aerodynamics Aerodynamics related to engineering Ground vehicles Fixed-wing aircraft Helicopters Missiles Model aircraft Related branches of aerodynamics Aerothermodynamics Aeroelasticity Boundary layers Turbulence External links NASA's Guide to Aerodynamics. . Aerodynamics for Students Aerodynamics for Pilots (archived) Aerodynamics and Race Car Tuning (archived) Aerodynamic Related Projects. . eFluids Bicycle Aerodynamics. . Application of Aerodynamics in Formula One (F1) (archived) Aerodynamics in Car Racing. . Aerodynamics of Birds. . NASA Aerodynamics Index Aerodynamics Aerospace engineering Energy in transport
Aerodynamics
[ "Physics", "Chemistry", "Engineering" ]
4,401
[ "Aerodynamics", "Physical systems", "Transport", "Aerospace engineering", "Energy in transport", "Fluid dynamics" ]
2,822
https://en.wikipedia.org/wiki/Ash
Ash is the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue after complete combustion. Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, as a product of wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. The ashes are of different types. Some ashes contain natural compounds that make soil fertile. Others have chemical compounds that can be toxic but may break up in soil from chemical changes and microorganism activity. Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available. Before industrialization, ash soaked in water was the primary means of obtaining potash. Natural occurrence Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal. Composition The composition of the ash varies depending on the product burned and its origin. The "ash content" or "mineral content" of a product is determined by incinerating it at high temperatures. Wood and plant matter The composition of ash derived from wood and other plant matter varies based on plant species, parts of the plants (such as bark, trunk, or young branches with foliage), type of soil, and time of year. The composition of these ashes also differs greatly depending on the mode of combustion. Wood ashes, in addition to residual carbonaceous materials (unconsumed embers, activated carbons impregnated with carbonaceous particles, tars, various gases, etc.), contain between 20% and 50% calcium in the form of calcium oxide and are generally rich in potassium carbonate. Ashes derived from grasses, and the Gramineae family in particular, are rich in silica. The color of the ash comes from small proportions of inorganic minerals such as iron oxides and manganese. The oxidized metal elements that constitute wood ash are mostly considered alkaline. For example, ash collected from wood boilers is composed of 17–33% calcium in the form of calcium oxide (CaO), 2–6% potassium in the form of potassium oxide (K2O), 2.5–4.6% magnesium in the form of magnesium oxide (MgO), 1–6% phosphorus in the form of phosphorus pentoxide (P2O5), and about 3% in total of oxides such as iron oxide, manganese oxide, and sodium oxide. The pH of the ash is between 10 and 13, mostly due to the fact that the oxides of calcium, potassium, and sodium are strong bases. Acidic components such as carbon dioxide, phosphoric acid, silicic acid, and sulfuric acid are rarely present and, in the presence of the previously mentioned bases, are generally found in the form of salts, respectively carbonates, phosphates, silicates and sulphates. Strictly speaking, calcium and potassium salts produce the aforementioned calcium oxide (also known as quicklime) and potassium oxide during the combustion of organic matter. But, in practice, quicklime is only obtained via a lime kiln, and potash (from potassium carbonate) or baking soda (from sodium carbonate) is extracted from the ashes. 
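As a small illustration of the ash-content determination described above, the gravimetric percentage can be computed as follows; the formula is the usual mass-ratio definition, assumed rather than quoted from the article:

```python
def ash_content_percent(sample_mass_g: float, residue_mass_g: float) -> float:
    """Ash (mineral) content as a percentage of the original sample mass,
    i.e. the non-gaseous, non-liquid residue left after complete combustion."""
    if sample_mass_g <= 0:
        raise ValueError("sample mass must be positive")
    return 100.0 * residue_mass_g / sample_mass_g

# Illustrative numbers: 5.00 g of dried wood leaving 0.12 g of residue -> 2.4% ash
print(f"{ash_content_percent(5.00, 0.12):.1f}%")
```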
Other substances such as sulfur, chlorine, iron or sodium only appear in small quantities. Still others are rarely found in wood, such as aluminum, zinc, and boron. (depending on the trace elements drawn from the soil by the incinerated plants). Mineral content in ash depends on the species of tree burned, even in the same soil conditions. More chloride is found in conifer trees than broadleaf trees, with seven times as much found in spruces than in oak trees. There is twice as much phosphoric acid in the European aspen than in oaks and twice as much magnesium in elm trees than in the Scotch pine. Ash composition also varies by which part of the tree was burnt. Silicon and calcium salts are more abundant in bark than in wood, while potassium salts are primarily found in wood. Compositional variation also occurred based on the season in which the tree died. Specific types Cremation ashes Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations. Food ashes In food processing, mineral and ash content is used to characterize the presence of organic and inorganic components in food for monitoring quality, nutritional quantification and labeling, analyzing microbiological stability, and more. This process can be used to measure minerals like calcium, sodium, potassium, and phosphorus as well as metal content such as lead, mercury, cadmium, and aluminum. Joss paper ash Analysis of the contents of ash samples shows that joss paper burning can emit many pollutants detrimental to air quality. There is a significant amount of heavy metals in the dust fume and bottom ash, e.g., aluminium, iron, manganese, copper, lead, zinc and cadmium. "Burning of joss paper accounted for up to 42% of the atmospheric rBC [refractory black carbon] mass, higher than traffic (14-17%), crop residue (10-17%), coal (18-20%) during the Hanyi festival in northwest China", according to a 2022 study, "the overall air quality can be worsened due to the practice of uncontrolled burning of joss paper during the festival, which is not just confined to the people who do the burning," and "burning joss paper during worship activities is common in China and most Asian countries with similar traditions." Slash-and-burn ash Wildfire ash High levels of heavy metals, including lead, arsenic, cadmium, and copper were found in the ash debris following the 2007 Californian wildfires. A national clean-up campaign was organised ... In the devastating California Camp Fire (2018) that killed 85 people, lead levels increased by around 50 times in the hours following the fire at a site nearby (Chico). Zinc concentration also increased significantly in Modesto, 150 miles away. Heavy metals such as manganese and calcium were found in numerous California fires as well. Others Ashes from Stubble burning Open burning of waste Cigarette or cigar ash Incinerator bottom ash, a form of ash produced in incinerators Products of coal combustion Bottom ash Fly ash Volcanic ash, ash that consists of fragmented glass, rock, and minerals that appears during an eruption. 
Wood ash Other properties Aging process Global distillation Uses Fertilizer Ashes have been used since the Neolithic period as fertilizer because they are rich in minerals, especially potash and essential nutrients. They are the main fertilizer in slash-and-burn agriculture, which eventually evolved into controlled burn and forest clearing practices. People in ancient history already possessed extensive knowledge of the nutrients produced by different ashes. For clay soil in particular, it was necessary to use either unmodified ash or ash whose minerals had been washed out with water. Laundry Because ashes contain potash, they can be used to make biodegradable laundry detergent. The demand for organic products has led to renewed interest in laundry using ash derived from wood. The French word for laundry derives from a Latin word for a substance made from ash and used to wash laundry. This usage also gave rise to a small, traditional architectural structure found to the west of the Rhône: a masonry structure built with stone or cob that looks like a cabinet and that holds dirty laundry and fireplace ash; when it is full, the laundry and ash are moved to a laundry container and boiled in water. Laundry using ash derived from wood has the benefit of being free, easy to produce, sustainable, and as efficient as standard laundry washing methods. Health effects Effect on precipitation "Particles of dust or smoke in the atmosphere are essential for precipitation. These particles, called 'condensation nuclei,' provide a surface for water vapor to condense upon. This helps water droplets gather together and become large enough to fall to the earth." Effect on climate change See also Aerosol Ash (analytical chemistry) Black carbon Carbon, basic component of ashes Carbon black Charcoal, carbon residue after heating wood mainly used as traditional fuel Cinereous, consisting of ashes, ash-colored or ash-like Coal, consisting of carbon as ash, and ash can be converted into coal Construction waste Dust Fugitive dust Potash, a term for many useful potassium salts that were traditionally derived from plant ashes, but today are typically mined from underground deposits References Combustion
Ash
[ "Chemistry" ]
1,905
[ "Combustion" ]
2,823
https://en.wikipedia.org/wiki/Antiderivative
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a continuous function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G. Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval. In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference. Examples The function F(x) = x^3/3 is an antiderivative of f(x) = x^2, since the derivative of x^3/3 is x^2. Since the derivative of a constant is zero, x^2 will have an infinite number of antiderivatives, such as x^3/3, x^3/3 + 1, x^3/3 - 2, etc. Thus, all the antiderivatives of x^2 can be obtained by changing the value of c in F(x) = x^3/3 + c, where c is an arbitrary constant known as the constant of integration. The graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value of c. More generally, the power function x^n has antiderivative x^(n+1)/(n+1) + c if n is not -1, and ln|x| + c if n = -1. In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations between acceleration, velocity and displacement. Uses and properties Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the continuous function f over the interval [a, b], then the definite integral of f from a to b equals F(b) - F(a). Because of this, each of the infinitely many antiderivatives of a given function f may be called the "indefinite integral" of f and written using the integral symbol with no bounds. If F is an antiderivative of f, and f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number c such that G(x) = F(x) + c for all x. Here c is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals, giving the most general antiderivative of such a function on its natural domain. Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary, that is, by integrating f from a fixed point a in the domain of f up to the variable x. Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus. There are many elementary functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions. 
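For reference, the worked example, the power rule, and the second fundamental theorem of calculus discussed above can be written out explicitly; these are standard calculus statements rather than formulas quoted verbatim from the article:

```latex
% Example: F(x) = x^3/3 is an antiderivative of f(x) = x^2, as is x^3/3 + c for any constant c:
F(x) = \frac{x^{3}}{3} + c, \qquad F'(x) = x^{2} = f(x)

% Power rule for antiderivatives, with the logarithmic case at n = -1:
\int x^{n}\,dx = \frac{x^{n+1}}{n+1} + c \quad (n \neq -1), \qquad \int x^{-1}\,dx = \ln|x| + c

% Second fundamental theorem of calculus:
\int_{a}^{b} f(x)\,dx = F(b) - F(a)
```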
Elementary functions are polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations under composition and linear combination. Examples of these nonelementary integrals are the error function, the Fresnel function, the sine integral, the logarithmic integral function and sophomore's dream. For a more detailed discussion, see also Differential Galois theory. Techniques of integration Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral. There exist many properties and techniques for finding antiderivatives. These include, among others: The linearity of integration (which breaks complicated integrals into simpler ones) Integration by substitution, often combined with trigonometric identities or the natural logarithm The inverse chain rule method (a special case of integration by substitution) Integration by parts (to integrate products of functions) Inverse function integration (a formula that expresses the antiderivative of the inverse of an invertible and continuous function f in terms of that inverse and the antiderivative of f). The method of partial fractions in integration (which allows us to integrate all rational functions, that is, fractions of two polynomials) The Risch algorithm Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem) Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of the integrand of the error function mentioned above) Algebraic manipulation of the integrand (so that other integration techniques, such as integration by substitution, may be used) Cauchy formula for repeated integration (to calculate the n-times antiderivative of a function) Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above (a short example appears below), which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals. Of non-continuous functions Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that: Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives. In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable. Assuming that the domains of the functions are open intervals: A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and y is any real number between f(a) and f(b), then there exists a c between a and b such that f(c) = y. This is a consequence of Darboux's theorem. The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function having an antiderivative, which has the given set as its set of discontinuities. 
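As an illustration of the computer algebra route mentioned in the Techniques of integration list above, a short SymPy sketch can find an antiderivative and confirm it by differentiation; the choice of library and of integrand is illustrative:

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(x)          # illustrative integrand (a product, so integration by parts applies)

F = sp.integrate(f, x)     # symbolic antiderivative; SymPy omits the constant of integration
print(F)                   # x*exp(x) - exp(x), i.e. (x - 1)*exp(x)

# Confirm the defining property F' = f by differentiating:
print(sp.simplify(sp.diff(F, x) - f) == 0)   # True

# Definite integral via the fundamental theorem of calculus, F(1) - F(0):
print(sp.integrate(f, (x, 0, 1)))            # 1
```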
If f has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative. If f has an antiderivative F on a closed interval [a, b], then for any choice of partition, if one chooses sample points as specified by the mean value theorem, the corresponding Riemann sum telescopes to the value F(b) - F(a). However, if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below. Some examples Basic formulae If the derivative of a function f is g, then an antiderivative of g is f plus an arbitrary constant. See also Antiderivative (complex analysis) Formal antiderivative Jackson integral Lists of integrals Symbolic integration Area Notes References Further reading Introduction to Classical Real Analysis, by Karl R. Stromberg; Wadsworth, 1981 (see also) Historical Essay On Continuity Of Derivatives by Dave L. Renfro External links Wolfram Integrator — Free online symbolic integration with Mathematica Function Calculator from WIMS Integral at HyperPhysics Antiderivatives and indefinite integrals at the Khan Academy Integral calculator at Symbolab The Antiderivative at MIT Introduction to Integrals at SparkNotes Antiderivatives at Harvey Mudd College Integral calculus Linear operators in calculus
Antiderivative
[ "Mathematics" ]
1,745
[ "Integral calculus", "Calculus" ]
2,838
https://en.wikipedia.org/wiki/Acrylic%20paint
Acrylic paint is a fast-drying paint made of pigment suspended in acrylic polymer emulsion and plasticizers, silicone oils, defoamers, stabilizers, or metal soaps. Most acrylic paints are water-based, but become water-resistant when dry. Depending on how much the paint is diluted with water, or modified with acrylic gels, mediums, or pastes, the finished acrylic painting can resemble a watercolor, a gouache, or an oil painting, or it may have its own unique characteristics not attainable with other media. Water-based acrylic paints are used as latex house paints, as latex is the technical term for a suspension of polymer microparticles in water. Interior latex house paints tend to be a combination of binder (sometimes acrylic, vinyl, PVA, and others), filler, pigment, and water. Exterior latex house paints may also be a co-polymer blend, but the best exterior water-based paints are 100% acrylic, because of its elasticity and other factors. Vinyl, however, costs half of what 100% acrylic resins cost, and polyvinyl acetate (PVA) is even cheaper, so paint companies make many different combinations of them to match the market. History Otto Röhm invented acrylic resin, which was quickly transformed into acrylic paint. As early as 1934, the first usable acrylic resin dispersion was developed by German chemical company BASF, and patented by Rohm and Haas. The synthetic paint was first used in the 1940s, combining some of the properties of oil and watercolor. Between 1946 and 1949, Leonard Bocour and Sam Golden invented a solution acrylic paint under the brand Magna paint. These were mineral spirit-based paints. Water-based acrylic paints were subsequently sold as latex house paints. Soon after the water-based acrylic binders were introduced as house paints, artists and companies alike began to explore the potential of the new binders. Diego Rivera, David Alfaro Siqueiros, and José Clemente Orozco were the first ones who experimented with acrylic paint. This is because they were very impressed with the durability of the acrylic paint. Because of this, artists and companies alike began to produce Politec Acrylic Artists' Colors in Mexico in 1953. According to The Times newspaper, Lancelot Ribeiro pioneered the use of acrylic paints in the UK because of his "increasing impatience" by the 1960s over the time it took for oil paints to dry, as also its "lack of brilliance in its colour potential." He took to the new synthetic plastic bases that commercial paints were beginning to use and soon got help from manufacturers like ICI, Courtaulds, and Geigy. The companies supplied him samples of their latest paints in quantities that he was using three decades later, according to the paper. Initially, the firms thought the PVA compounds would not be needed in commercially viable quantities. But they quickly recognised the potential demand and "so Ribeiro became the godfather of generations of artists using acrylics as an alternative to oils." In 1956, José L. Gutiérrez produced Politec Acrylic Artists' Colors in Mexico, and Henry Levison of Cincinnati-based Permanent Pigments Co. produced Liquitex colors. These two product lines were the first acrylic emulsion artists' paints, with modern high-viscosity paints becoming available in the early 1960s. Meanwhile, on the other side of the globe, 1958 saw the inception of Vynol Paints Pty Ltd (now Derivan) in Australia, who started producing a water-based artist acrylic called Vynol Colour, followed by Matisse Acrylics in the 1960s. 
Following that development, Golden came up with a waterborne acrylic paint called "Aquatec". In 1963, George Rowney (part of Daler-Rowney since 1983) was the first manufacturer to introduce artists' acrylic paints in Europe, under the brand name "Cryla". Painting with acrylics Acrylic painters can modify the appearance, hardness, flexibility, texture, and other characteristics of the paint surface by using acrylic medium or simply by adding water. Watercolor and oil painters also use various mediums, but the range of acrylic mediums is much greater. Acrylics have the ability to bond to many different surfaces, and mediums can be used to modify their binding characteristics. Acrylics can be used on paper, canvas, and a range of other materials; however, their use on engineered woods such as medium-density fiberboard can be problematic because of the porous nature of those surfaces. In these cases, it is recommended that the surface first be sealed with an appropriate sealer. The process of sealing acrylic painting is called varnishing. Artists use removable varnishes over isolation coat to protect paintings from dust, UV, scratches, etc. This process is similar to varnishing an oil painting. Acrylics can be applied in thin layers or washes to create effects that resemble watercolors and other water-based mediums. They can also be used to build thick layers of paint — gel and molding paste are sometimes used to create paintings with relief features. Acrylic paints are also used in hobbies such as trains, cars, houses, DIY projects, and human models. People who make such models use acrylic paint to build facial features on dolls or raised details on other types of models. Wet acrylic paint is easily removed from paintbrushes and skin with water, whereas oil paints require the use of a hydrocarbon. Acrylics are the most common paints used in grattage, a surrealist technique that began to be used with the advent of this type of paint. Acrylics are used for this purpose because they easily scrape or peel from a surface. Painting techniques Acrylic artists' paints may be thinned with water or acrylic medium and used as washes in the manner of watercolor paints, but unlike watercolor the washes are not rehydratable once dry. For this reason, acrylics do not lend themselves to the color lifting techniques of gum arabic-based watercolor paints. Instead, the paint is applied in layers, sometimes diluting with water or acrylic medium to allow layers underneath to partially show through. Using an acrylic medium gives the paint more of a rich and glossy appearance, whereas using water makes the paint look more like watercolor and have a matte finish. Acrylic paints with gloss or matte finishes are common, although a satin (semi-matte) sheen is most common. Some brands exhibit a range of finishes (e.g. heavy-body paints from Golden, Liquitex, Winsor & Newton and Daler-Rowney); Politec acrylics are fully matte. As with oils, pigment amounts and particle size or shape can affect the paint sheen. Matting agents can also be added during manufacture to dull the finish. If desired, the artist can mix different media with their paints and use topcoats or varnishes to alter or unify sheen. When dry, acrylic paint is generally non-removable from a solid surface if it adheres to the surface. Water or mild solvents do not re-solubilize it, although isopropyl alcohol can lift some fresh paint films off. Toluene and acetone can remove paint films, but they do not lift paint stains very well and are not selective. 
The use of a solvent to remove paint may result in removal of all of the paint layers (acrylic gesso, et cetera). Oils and warm, soapy water can remove acrylic paint from skin. Acrylic paint can be removed from nonporous plastic surfaces such as miniatures or models using cleaning products such as Dettol (containing chloroxylenol 4.8% v/w). An acrylic sizing should be used to prime canvas in preparation for painting with acrylic paints, to prevent Support Induced Discoloration (SID). Acrylic paint contains surfactants that can pull up discoloration from a raw canvas, especially in transparent glazed or translucent gelled areas. Gesso alone will not stop SID; a sizing must be applied before using a gesso. The viscosity of acrylic can be successfully reduced by using suitable extenders that maintain the integrity of the paint film. There are retarders to slow drying and extend workability time, and flow releases to increase color-blending ability. Properties Grades Commercial acrylic paints come in two grades by manufacturers: Artist acrylics (professional acrylics) are created and designed to resist chemical reactions from exposure to water, ultraviolet light, and oxygen. Professional-grade acrylics have the most pigment, which allows for more medium manipulation and limits the color shift when mixed with other colors or after drying. Student acrylics have working characteristics similar to artist acrylics, but with lower pigment concentrations, less-expensive formulas, and fewer available colors. More expensive pigments are generally replicated by hues. Colors are designed to be mixed even though color strength is lower. Hues may not have exactly the same mixing characteristics as full-strength colors. Varieties Heavy body acrylics are typically found in the Artist and Student Grade paints. "Heavy Body" refers to the viscosity or thickness of the paint. They are the best choice for impasto or heavier paint applications and will hold a brush or knife stroke and even a medium stiff peak. Gel Mediums ("pigment-less paints") are also available in various viscosities and used to thicken or thin paints, as well as extend paints and add transparency. Examples of Heavy Body Acrylics are Matisse Structure Acrylic Colors, Lukas Pastos Acrylics, Liquitex Heavy Body Acrylics and Golden Heavy Body Acrylics. Medium viscosity acrylics – Fluid acrylics, Soft body acrylics, or High Flow acrylics – have a lower viscosity but generally the same pigmentation as the Heavy Body acrylics. Available in either Artist quality or Craft quality, the cost and quality vary accordingly. These paints are good for watercolor techniques, airbrush application, or when smooth coverage is desired. Fluid acrylics can be mixed with any medium to thicken them for impasto work, or to thin them for glazing applications. Examples of fluid acrylics include Lukascryl Liquid, Lukascryl Studio, Liquitex Soft Body and Golden Fluid acrylics. Open acrylics were created to address the one major difference between oil and acrylic paints: the shortened time it takes acrylic paints to dry. Designed by Golden Artist Colors, Inc. with a hydrophilic acrylic resin, these paints can take anywhere from a few hours to a few days, or even weeks, to dry completely, depending on paint thickness, support characteristics, temperature, and humidity. Iridescent, pearl and interference acrylic colors combine conventional pigments with powdered mica (aluminium silicate) or powdered bronze to achieve complex visual effects. 
Colors have shimmering or reflective characteristics, depending on the coarseness or fineness of the powder. Iridescent colors are used in fine arts and crafts. Acrylic gouache is like traditional gouache because it dries to a matte, opaque finish. However, unlike traditional gouache, the acrylic binder makes it water-resistant once it dries. Like craft paint, it will adhere to a variety of surfaces, not only canvas and paper. This paint is typically used by water-colorists, cartoonists, or illustrators, and for decorative or folk art applications. Examples of acrylic gouache are Lascaux Gouache and Turner Acryl Gouache. Craft acrylics can be used on surfaces besides canvas, such as wood, metal, fabrics, and ceramics. They are used in decorative painting techniques and faux finishes to decorate objects of ordinary life. Although colors can be mixed, pigments are often not specified. Each color line is formulated instead to achieve a wide range of premixed colors. Craft paints usually employ vinyl or PVA resins to increase adhesion and lower cost. Interactive acrylics are all-purpose acrylic artists' colors which have the characteristic fast-drying nature of artists' acrylics, but are formulated to allow artists to delay drying when they need more working time, or re-wet their work when they want to do more wet blending. Exterior acrylics are paints that can withstand outdoor conditions. Like craft acrylics, they adhere to many surfaces. They are more resistant to both water and ultraviolet light. This makes them the acrylic of choice for architectural murals, outdoor signs, and many faux-finishing techniques. Acrylic glass paint is water-based and semi-permanent, making it a suitable paint for temporary displays on glass windows. Acrylic enamel paint creates a smooth, hard shell. It can be oven-baked or air dried. It can be permanent if kept away from harsh conditions such as dishwashing. Differences between acrylic and oil paint The vehicle and binder of oil paints is linseed oil (or another drying oil), whereas acrylic paint has water as the vehicle for an emulsion (suspension) of acrylic polymer, which serves as the binder. Thus, oil paint is said to be "oil-based", whereas acrylic paint is "water-based" (or sometimes "water-borne"). The main practical difference between most acrylics and oil paints is the inherent drying time. Oils allow for more time to blend colors and apply even glazes over underpaintings. This slow-drying aspect of oil can be seen as an advantage for certain techniques, but it impedes an artist trying to work quickly. The fast evaporation of water from regular acrylic paint films can be slowed with the use of acrylic retarders. Retarders are generally glycol or glycerin-based additives. The addition of a retarder slows the evaporation rate of the water. Oil paints may require the use of solvents such as mineral spirits or turpentine to thin the paint and clean up. These solvents generally have some level of toxicity and can be found objectionable. Relatively recently, water-miscible oil paints have been developed for artists' use. Oil paint films can gradually yellow and lose their flexibility over time creating cracks in the paint film; the "fat over lean" rule must be observed to ensure its durability. Oil paint has a higher pigment load than acrylic paint. As linseed oil contains a smaller molecule than acrylic paint, oil paint is able to absorb substantially more pigment. 
The oil binder is less clear than acrylic dispersions and has a different refractive index, which imparts a unique "look and feel" to the resultant paint film. Not all the pigments of oil paints are available in acrylics and vice versa, as each medium has different chemical sensitivities. Some historical pigments are alkali sensitive, and therefore cannot be made in an acrylic emulsion; others are just too difficult to formulate. Approximate "hue" color formulations that do not contain the historical pigments are typically offered as substitutes. Because of acrylic paint's more flexible nature and more consistent drying time between layers, an artist does not have to follow the same rules of oil painting, where more medium must be applied to each layer to avoid cracking. It usually takes 10–20 minutes for one to two layers of acrylic paint to dry, depending on the brand, quality, and humidity levels of the surrounding environment. Some professional grades of acrylic paint can take 20–30 minutes or even more than an hour. Although canvas needs to be properly primed before painting with oils to prevent the paint medium from eventually rotting the canvas, acrylic can be safely applied straight to the canvas. The rapid drying of acrylic paint tends to discourage blending of color and use of wet-in-wet technique as in oil painting. Even though acrylic retarders can slow drying time to several hours, it remains a relatively fast-drying medium and adding too much acrylic retarder can prevent the paint from ever drying properly. Meanwhile, acrylic paint is very elastic, which helps prevent cracking. Acrylic paint's binder is acrylic polymer emulsion – as this binder dries, the paint remains flexible. Another difference between oil and acrylic paints is the versatility offered by acrylic paints. Acrylics are very useful in mixed media, allowing the use of pastel (oil and chalk), charcoal and pen (among others) on top of the dried acrylic painted surface. Mixing other bodies into the acrylic is possible—sand, rice, and even pasta may be incorporated in the artwork. Mixing artist or student grade acrylic paint with household acrylic emulsions is possible, allowing the use of premixed tints straight from the tube or tin, and thereby presenting the painter with a vast color range at their disposal. This versatility is also illustrated by the variety of additional artistic uses for acrylics. Specialized acrylics have been manufactured and used for linoblock printing (acrylic block printing ink has been produced by Derivan since the early 1980s), face painting, airbrushing, watercolor-like techniques, and fabric screen printing. Another difference between oil and acrylic paint is the cleanup. Acrylic paint can be cleaned out of a brush with any soap, while oil paint needs a specific type of soap to be sure to get all the oil out of the brushes. Also, it is easier to let a palette with oil paint dry and then scrape the paint off, whereas one can easily clean wet acrylic paint with water. Difference between acrylic and watercolor paint The biggest difference is that acrylic paint is opaque, whereas watercolor paint is translucent in nature. Watercolors take about 5 to 15 minutes to dry while acrylics take about 10 to 20 minutes. In order to change the tone or shade of a watercolor pigment, one changes the percentage of water mixed into the color. For lighter colors, one adds more water; for darker, more intense colors, one adds less. In order to create lighter or darker colors with acrylic paints, one adds white or black. 
Another difference is that watercolors must be painted onto a porous surface, primarily watercolor paper. Acrylic paints can be used on many different surfaces. Both acrylic and watercolor are easy to clean up with water. Acrylic paint should be cleaned with soap and water immediately following use. Watercolor paint can be cleaned with just water. See also Notes and references External links Handling and Care Tips for paintings American inventions Visual arts materials Paints Watermedia
Acrylic paint
[ "Chemistry" ]
3,998
[ "Paints", "Coatings" ]
2,839
https://en.wikipedia.org/wiki/Angular%20momentum
Angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it. The three-dimensional angular momentum for a point particle is classically represented as a pseudovector L = r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is p = mv in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it. Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The change in angular momentum for a particular interaction is called angular impulse, sometimes twirl. Angular impulse is the angular analog of (linear) impulse. Examples The trivial case of the angular momentum L of a body in an orbit is given by L = 2πMfr², where M is the mass of the orbiting object, f is the orbit's frequency and r is the orbit's radius. The angular momentum of a uniform rigid sphere rotating around its axis, instead, is given by L = (4/5)πMfr², where M is the sphere's mass, f is the frequency of rotation and r is the sphere's radius. Thus, for example, the orbital angular momentum of the Earth with respect to the Sun is about 2.66 × 10⁴⁰ kg⋅m²⋅s⁻¹, while its rotational angular momentum is about 7.05 × 10³³ kg⋅m²⋅s⁻¹. In the case of a uniform rigid sphere rotating around its axis, if, instead of its mass, its density is known, the angular momentum is given by L = (16/15)π²ρfr⁵, where ρ is the sphere's density, f is the frequency of rotation and r is the sphere's radius. In the simplest case of a spinning disk, the angular momentum is given by L = πMfr², where M is the disk's mass, f is the frequency of rotation and r is the disk's radius. If instead the disk rotates about its diameter (e.g. 
coin toss), its angular momentum is given by Definition in classical mechanics Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The Earth has an orbital angular momentum by nature of revolving around the Sun, and a spin angular momentum by nature of its daily rotation around the polar axis. The total angular momentum is the sum of the spin and orbital angular momenta. In the case of the Earth the primary conserved quantity is the total angular momentum of the solar system because angular momentum is exchanged to a small but important extent among the planets and the Sun. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar. Orbital angular momentum in two dimensions Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum is proportional to mass and linear speed angular momentum is proportional to moment of inertia and angular speed measured in radians per second. Unlike mass, which depends only on amount of matter, moment of inertia depends also on the position of the axis of rotation and the distribution of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, should be referred to as the angular momentum relative to that center. In the case of circular motion of a single particle, we can use and to expand angular momentum as reducing to: the product of the radius of rotation and the linear momentum of the particle , where is the linear (tangential) speed. This simple analysis can also apply to non-circular motion if one uses the component of the motion perpendicular to the radius vector: where is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed, where is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, , to which the term moment of momentum refers. Scalar angular momentum from Lagrangian mechanics Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass constrained to move in a circle of radius in the absence of any external force field. 
The kinetic energy of the system is And the potential energy is Then the Lagrangian is The generalized momentum "canonically conjugate to" the coordinate is defined by Orbital angular momentum in three dimensions To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as: where is the moment of inertia for a point mass, is the orbital angular velocity of the particle about the origin, is the position vector of the particle relative to the origin, and , is the linear velocity of the particle relative to the origin, and is the mass of the particle. This can be expanded, reduced, and by the rules of vector algebra, rearranged: which is the cross product of the position vector and the linear momentum of the particle. By the definition of the cross product, the vector is perpendicular to both and . It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector defines the plane in which and lie. By defining a unit vector perpendicular to the plane of angular displacement, a scalar angular speed results, where and where is the perpendicular component of the motion, as above. The two-dimensional scalar equations of the previous section can thus be given direction: and for circular motion, where all of the motion is perpendicular to the radius . In the spherical coordinate system the angular momentum vector expresses as Analogy to linear momentum Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape. Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product, is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point, is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. 
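To make the vector definition above concrete, here is a minimal numerical sketch (Python with NumPy; the particle values are made up, the Earth–Sun constants are rounded, and the uniform-sphere spin estimate is an assumption, so the results only approximate the figures quoted in the Examples section):

```python
import numpy as np

# Orbital angular momentum of a point particle: L = r x p, with p = m v.
m = 0.5                                   # kg (made-up)
r = np.array([1.0, 2.0, -1.0])            # m
v = np.array([0.3, -0.4, 1.2])            # m/s
L = np.cross(r, m * v)
print("L =", L, "| L.r =", np.dot(L, r), "| L.v =", np.dot(L, v))  # both dot products ~0

# Rough check of the Earth figures quoted in the Examples section (rounded constants).
M_earth, r_orbit, T_orbit = 5.97e24, 1.496e11, 365.25 * 86400
r_vec = np.array([r_orbit, 0.0, 0.0])
v_vec = np.array([0.0, 2 * np.pi * r_orbit / T_orbit, 0.0])   # circular-orbit speed
L_orbit = np.linalg.norm(np.cross(r_vec, M_earth * v_vec))
print(f"orbital L ~ {L_orbit:.2e} kg m^2/s")                  # ~2.7e40

R_earth, T_spin = 6.371e6, 86164.0
L_spin = 0.4 * M_earth * R_earth**2 * (2 * np.pi / T_spin)    # uniform-sphere model
print(f"spin L    ~ {L_spin:.2e} kg m^2/s")                   # ~7e33
```

The real Earth is centrally condensed, so the uniform-sphere assumption somewhat overestimates the true spin angular momentum; it is used here only to reproduce the ballpark of the quoted figure.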
The equation L = rmv⊥ combines a moment (a mass m turning on the moment arm r) with a linear (straight-line equivalent) speed v⊥. Linear speed referred to the central point is simply the product of the distance r and the angular speed ω versus the point, v⊥ = rω: another moment. Hence, angular momentum contains a double moment: L = rmrω. Simplifying slightly, L = r²mω; the quantity r²m is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia. The above analogy of the translational momentum and rotational momentum can be expressed in vector form: p = mv for linear motion and L = Iω for rotation. The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation. Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits. For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass. For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random. In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by I = k²m, where k is the radius of gyration, the distance from the axis at which the entire mass may be considered as concentrated. Similarly, for a point mass the moment of inertia is defined as I = r²m, where r is the radius of the point mass from the center of rotation, and for any collection of particles as the sum ∑ mi ri² over the particles i. Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m²/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is not done in the International system of units). The units of angular momentum can be interpreted as torque⋅time. An object with angular momentum of L N⋅m⋅s can be reduced to zero angular velocity by an angular impulse of L N⋅m⋅s. 
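As a small illustration of the sums just described, the sketch below (Python; masses, positions and spin rate are invented values) computes I = ∑ mi ri² about the z-axis for a few point masses, the equivalent radius of gyration, and the resulting angular momentum:

```python
import numpy as np

# A sketch: moment of inertia of a few point masses about the z-axis,
# I = sum_i m_i d_i^2 with d_i the distance from the axis, plus the
# radius of gyration k defined by I = M k^2.  All values are invented.
masses = np.array([1.0, 2.0, 0.5])                        # kg
positions = np.array([[0.2, 0.0, 0.3],
                      [0.0, 0.5, -0.1],
                      [0.4, 0.4, 0.0]])                   # m

d2 = positions[:, 0]**2 + positions[:, 1]**2              # squared distances from the z-axis
I_z = float(np.sum(masses * d2))
M = float(masses.sum())
k = (I_z / M) ** 0.5                                       # radius of gyration

omega_z = 3.0                                              # rad/s, assumed spin rate
print("I_z =", I_z, "kg m^2")
print("k   =", k, "m")
print("L_z =", I_z * omega_z, "kg m^2/s  (= M k^2 omega_z)")
```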
The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System. Angular momentum and torque Newton's second law of motion can be expressed mathematically, or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows: which means that the torque (i.e. the time derivative of the angular momentum) is Because the moment of inertia is , it follows that , and which, reduces to This is the rotational analog of Newton's second law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass. Conservation of angular momentum General considerations A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque about the same axis." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved). Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted upon by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant. The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque because in this case and are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom. For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year. The conservation of angular momentum explains the angular acceleration of an ice skater as they bring their arms and legs close to the vertical axis of rotation. By bringing part of the mass of their body closer to the axis, they decrease their body's moment of inertia. 
Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase. The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars. Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved. Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved. Relation to Newton's second law of motion While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space. As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in their hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum L, moment of inertia I and angular velocity ω: Using this, we see that the change requires an energy of: so that a decrease in the moment of inertia requires investing energy. This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of: Let us observe a point of mass m, whose position vector relative to the center of motion is perpendicular to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is: Thus the work required for moving this point to a distance dz farther from the center of motion is: For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives: which is exactly the energy required for keeping the angular momentum conserved. Note, that the above calculation can also be performed per mass, using kinematics only. 
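Putting illustrative numbers on the skater argument (a sketch only; all values are invented):

```python
# Skater example in numbers (made-up values).
I1, omega1 = 4.0, 2.0          # kg m^2, rad/s  (arms extended)
I2 = 1.6                       # kg m^2          (arms pulled in)

L = I1 * omega1                # angular momentum, conserved
omega2 = L / I2                # spin rate increases as I decreases

KE1 = 0.5 * I1 * omega1**2
KE2 = 0.5 * I2 * omega2**2     # larger: the extra energy comes from the
                               # muscular work of pulling the arms inward
print(omega2, KE1, KE2)        # 5.0 rad/s, 8.0 J, 20.0 J
```

The kinetic energy rises from 8 J to 20 J; the 12 J difference is the work done against the centripetal force while the mass is pulled inward, matching the calculation above.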
Thus the phenomena of figure skater accelerating tangential velocity while pulling their hands in, can be understood as follows in layman's language: The skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but do not gain additional speed because the accelerating is always done when their motion inwards is zero. However, this is different when pulling the palms closer to the body: The acceleration due to rotation now increases the speed; but because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed. Stationary-action principle In classical mechanics it can be shown that the rotational invariance of action functionals implies conservation of angular momentum. The action is defined in classical physics as a functional of positions, often represented by the use of square brackets, and the final and initial times. It assumes the following form in cartesian coordinates:where the repeated indices indicate summation over the index. If the action is invariant of an infinitesimal transformation, it can be mathematically stated as: . Under the transformation, , the action becomes: where we can employ the expansion of the terms up-to first order in : giving the following change in action: Since all rotations can be expressed as matrix exponential of skew-symmetric matrices, i.e. as where is a skew-symmetric matrix and is angle of rotation, we can express the change of coordinates due to the rotation , up-to first order of infinitesimal angle of rotation, as: Combining the equation of motion and rotational invariance of action, we get from the above equations that:Since this is true for any matrix that satisfies it results in the conservation of the following quantity: as . This corresponds to the conservation of angular momentum throughout the motion. Lagrangian formalism In Lagrangian mechanics, angular momentum for rotation around a given axis, is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, , the angular momentum around the z axis, is: where is the Lagrangian and is the angle around the z axis. Note that , the time derivative of the angle, is the angular velocity . Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: The latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to: where the subscript i stands for the i-th body, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively. For a body that is not point-like, with density ρ, we have instead: where integration runs over the area of the body, and Iz is the moment of inertia around the z-axis. Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the ith object: We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum: From Euler–Lagrange equations it then follows that: Since the lagrangian is dependent upon the angles of the object only through the potential, we have: which is the torque on the ith object. 
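Before the rotational-invariance argument that follows, here is a quick numerical check of the conjugate-momentum relation just stated, pθ = ∂L/∂θ̇ = m r² θ̇, for a single planar particle (a sketch; the quadratic central potential and all numbers are assumptions chosen only for illustration):

```python
# A sketch: p_theta = dL/d(theta_dot) for the planar Lagrangian
# L = 1/2 m (r_dot^2 + r^2 theta_dot^2) - V(r), with an assumed potential
# V(r) = 1/2 k r^2, evaluated by a central finite difference and compared
# with the closed form m r^2 theta_dot.
def lagrangian(r, r_dot, theta_dot, m=1.3, k=2.0):
    return 0.5 * m * (r_dot**2 + r**2 * theta_dot**2) - 0.5 * k * r**2

m, r, r_dot, theta_dot, h = 1.3, 0.8, 0.1, 2.5, 1e-6
p_theta_fd = (lagrangian(r, r_dot, theta_dot + h)
              - lagrangian(r, r_dot, theta_dot - h)) / (2 * h)
p_theta_cf = m * r**2 * theta_dot

print(p_theta_fd, p_theta_cf)   # both ~2.08; they agree to roundoff of the difference scheme
```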
Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form θi − θj). We therefore get for the total angular momentum dLz/dt = −∂V/∂θz = 0, and thus the angular momentum around the z-axis is conserved. This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. It is true that in the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, also the specification of three rotational degrees of freedom; however, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator. Hamiltonian formalism Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the ith object is Lz²/(2Iz), which is analogous to the energy dependence upon momentum along the z-axis, pz²/(2m). Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis: dθz/dt = ∂H/∂Lz and dLz/dt = −∂H/∂θz. The first equation gives Lz = Iz(dθz/dt), and so we get the same results as in the Lagrangian formalism. Note, that for combining all axes together, we write the kinetic energy as T = pr²/(2m) + LᵀI⁻¹L/2, where pr is the momentum in the radial direction, and the moment of inertia is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors. For point-like bodies we have T = pr²/(2m) + L²/(2mr²). This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical work frame (e.g. in the hydrogen atom problem). Angular momentum in orbital mechanics While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in a central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame. In astrodynamics and celestial mechanics, a quantity closely related to angular momentum is defined as h = r × v, called specific angular momentum. Note that mass is often unimportant in orbital mechanics calculations, because motion of a body is determined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the gravitational effect of the smaller bodies on it can be neglected; it maintains, in effect, constant velocity. The motion of all bodies is affected by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions. Solid bodies Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet. For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. 
Therefore, the infinitesimal angular momentum of this element is: and integrating this differential over the volume of the entire mass gives its total angular momentum: In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass. Collection of particles For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given, is the mass of particle , is the position vector of particle w.r.t. the origin, is the velocity of particle w.r.t. the origin, is the position vector of the center of mass w.r.t. the origin, is the velocity of the center of mass w.r.t. the origin, is the position vector of particle w.r.t. the center of mass, is the velocity of particle w.r.t. the center of mass, The total mass of the particles is simply their sum, The position vector of the center of mass is defined by, By inspection, and The total angular momentum of the collection of particles is the sum of the angular momentum of each particle, Expanding , Expanding , It can be shown that (see sidebar), and therefore the second and third terms vanish, The first term can be rearranged, and total angular momentum for the collection of particles is finally, The first term is the angular momentum of the center of mass relative to the origin. Similar to , below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to , below. The result is general—the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body. Rearranging equation () by vector identities, multiplying both terms by "one", and grouping appropriately, gives the total angular momentum of the system of particles in terms of moment of inertia and angular velocity , Single particle case In the case of a single particle moving about the arbitrary origin, and equations () and () for total angular momentum reduce to, Case of a fixed center of mass For the case of the center of mass fixed in space with respect to the origin, and equations () and () for total angular momentum reduce to, Angular momentum in general relativity In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is generally not conserved locally for general curved spacetimes, unless they have rotational symmetry; whereas globally the notion of angular momentum itself only makes sense if the spacetime is asymptotically flat. If the spacetime is only axially symmetric like for the Kerr metric, the total angular momentum is not conserved but is conserved which is related to the invariance of rotating around the symmetry-axis, where note that where is the metric, is the rest mass, is the four-velocity, and is the four-position in spherical coordinates. 
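Returning to the classical "Collection of particles" result above, the following sketch (Python with NumPy, random illustrative data) checks numerically that the total angular momentum equals the centre-of-mass term plus the term for motion about the centre of mass:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0.5, 2.0, size=5)            # kg, made-up masses
r = rng.normal(size=(5, 3))                  # m, positions w.r.t. the origin
v = rng.normal(size=(5, 3))                  # m/s, velocities w.r.t. the origin

M = m.sum()
R = (m[:, None] * r).sum(axis=0) / M         # centre-of-mass position
V = (m[:, None] * v).sum(axis=0) / M         # centre-of-mass velocity

L_total = np.sum(np.cross(r, m[:, None] * v), axis=0)
L_com = M * np.cross(R, V)                                      # centre of mass about the origin
L_rel = np.sum(np.cross(r - R, m[:, None] * (v - V)), axis=0)   # motion about the centre of mass

print(L_total)
print(L_com + L_rel)     # identical up to floating-point rounding
```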
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor: Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them. In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an anti-symmetric tensor of second order: in terms of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the moment of mass, i.e., the product of the relativistic mass of the particle and its centre of mass, which can be thought of as describing the motion of its centre of mass, since mass–energy is conserved. In each of the above cases, for a system of particles the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system. Angular momentum in quantum mechanics In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion. In relativistic quantum mechanics the above relativistic definition becomes a tensorial operator. Spin, orbital, and total angular momentum The classical definition of angular momentum as can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (See also the discussion below of the angular momentum operators as the generators of rotations.) However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin (possibly zero), and almost all elementary particles have nonzero spin. 
For example electrons have "spin 1/2" (this actually means "spin ħ/2"), photons have "spin 1" (this actually means "spin ħ"), and pi-mesons have spin 0. Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, .) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have half-integer values. In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes. Quantization In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where is the reduced Planck constant and is any Euclidean vector such as x, y, or z: The reduced Planck constant is tiny by everyday standards, about 10−34 J s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum. Quantization of angular momentum was first postulated by Niels Bohr in his model of the atom and was later predicted by Erwin Schrödinger in his Schrödinger equation. Uncertainty In the definition , six operators are involved: The position operators , , , and the momentum operators , , . However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis. The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example . (For the precise commutation relations, see angular momentum operator.) Total angular momentum as generator of rotations As mentioned above, orbital angular momentum L is defined as in classical mechanics: , but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations". More specifically, J is defined so that the operator is the rotation operator that takes any system and rotates it by angle about the axis . (The "exp" in the formula refers to operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators. 
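A concrete finite-dimensional illustration of these operators is sketched below (Python with NumPy, ħ set to 1; the matrices are the standard ℓ = 1 orbital angular momentum matrices in the m = +1, 0, −1 basis, reproduced from standard results rather than taken from this article):

```python
import numpy as np

hbar = 1.0
s = np.sqrt(2.0)

# l = 1 angular momentum matrices in the basis m = +1, 0, -1 (hbar = 1).
Lz = hbar * np.diag([1.0, 0.0, -1.0])
Lp = hbar * s * np.array([[0, 1, 0],
                          [0, 0, 1],
                          [0, 0, 0]], dtype=float)   # raising operator L+
Lm = Lp.T                                            # lowering operator L- = (L+)^dagger
Lx = 0.5 * (Lp + Lm)
Ly = -0.5j * (Lp - Lm)

# Check the so(3) commutation relation [Lx, Ly] = i*hbar*Lz.
comm = Lx @ Ly - Ly @ Lx
print(np.allclose(comm, 1j * hbar * Lz))             # True

# Check the quantized values: L^2 = l(l+1) hbar^2 with l = 1, and Lz eigenvalues -1, 0, +1 (in hbar).
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
print(np.allclose(L2, 2 * hbar**2 * np.eye(3)))      # True
print(np.linalg.eigvalsh(Lz))                        # [-1., 0., 1.]
```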
The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant. Angular momentum in electrodynamics When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units) where e is the electric charge of the particle and A the magnetic vector potential of the electromagnetic field. The gauge-invariant angular momentum, that is kinetic angular momentum, is given by The interplay with quantum mechanics is discussed further in the article on canonical commutation relations. Angular momentum in optics In classical Maxwell electrodynamics the Poynting vector is a linear momentum density of electromagnetic field. The angular momentum density vector is given by a vector product as in classical mechanics: The above identities are valid locally, i.e. in each space point in a given moment . Angular momentum in nature and the cosmos Tropical cyclones and other related weather phenomena involve conservation of angular momentum in order to explain the dynamics. Winds revolve slowly around low pressure systems, mainly due to the coriolis effect. If the low pressure intensifies and the slowly circulating air is drawn toward the center, the molecules must speed up in order to conserve angular momentum. By the time they reach the center, the speeds become destructive. Johannes Kepler determined the laws of planetary motion without knowledge of conservation of momentum. However, not long after his discovery their derivation was determined from conservation of angular momentum. Planets move more slowly the further they are out in their elliptical orbits, which is explained intuitively by the fact that orbital angular momentum is proportional to the radius of the orbit. Since the mass does not change and the angular momentum is conserved, the velocity drops. Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit (~3.8 cm per year) and Earth to be decelerated (by −25.858 ± 0.003″/cy²) in its rotation (the length of the day increases by ~1.7 ms per century, +2.3 ms from tidal effect and −0.6 ms from post-glacial rebound). The Earth loses angular momentum which is transferred to the Moon such that the overall angular momentum is conserved. Angular momentum in engineering and technology Examples of using conservation of angular momentum for practical advantage are abundant. In engines such as steam engines or internal combustion engines, a flywheel is needed to efficiently convert the lateral motion of the pistons to rotational motion. Inertial navigation systems explicitly use the fact that angular momentum is conserved with respect to the inertial frame of space. 
Inertial navigation is what enables submarine trips under the polar ice cap, but it is also crucial to all forms of modern navigation. Rifled bullets use the stability provided by conservation of angular momentum to follow a truer trajectory. The invention of rifled firearms and cannons gave their users significant strategic advantage in battle, and was thus a technological turning point in history. History Isaac Newton, in the Principia, hinted at angular momentum in his examples of the first law of motion: "A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time." He did not further investigate angular momentum directly in the Principia, saying: "From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject." However, his geometric proof of the law of areas is an outstanding example of Newton's genius, and indirectly proves angular momentum conservation in the case of a central force. Law of Areas Newton's derivation As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws. During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times. At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve. Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero. 
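The same conclusion can be checked numerically. The sketch below (Python with NumPy, arbitrary units and made-up initial conditions) integrates planar motion under an inverse-square central attraction and shows that the area swept out per unit time stays constant and equals L/2m:

```python
import numpy as np

# Planar orbit under an inverse-square central attraction (arbitrary units).
GM, m, dt, steps = 1.0, 1.0, 1e-4, 200000
r = np.array([1.0, 0.0])
v = np.array([0.0, 0.9])                      # gives an eccentric (non-circular) orbit

def accel(r):
    return -GM * r / np.linalg.norm(r)**3     # central: always directed along -r

swept = 0.0
checkpoints = []
v = v + 0.5 * dt * accel(r)                   # leapfrog half-kick
for i in range(1, steps + 1):
    r_new = r + dt * v
    # Area of the thin triangle swept between r and r_new: 1/2 |r x r_new|.
    swept += 0.5 * abs(r[0] * r_new[1] - r[1] * r_new[0])
    r = r_new
    v = v + dt * accel(r)
    if i % 50000 == 0:
        checkpoints.append(swept / (i * dt))  # average areal velocity so far

L = m * 1.0 * 0.9                             # initial |r x p| (r and v start perpendicular)
print("areal velocity at checkpoints:", checkpoints)
print("L / (2 m) =", L / (2 * m))             # matches the checkpoints
```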
Conservation of angular momentum in the law of areas The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius r, and that the heights of the triangles are proportional to the perpendicular component of the velocity, v⊥. Hence, if the area swept per unit time is constant, then by the triangular area formula A = ½(base)(height), the product (base)(height) and therefore the product rv⊥ are constant: if r and the base length are decreased, v⊥ and the height must increase proportionally. Mass is constant, therefore angular momentum is conserved by this exchange of distance and velocity. In the case of triangle SBC, area is equal to ½(SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore rv⊥, remain constant. Similarly so for each of the triangles. Another areal proof of conservation of angular momentum for any central force uses Mamikon's sweeping tangents theorem. After Newton Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter. In 1736 Euler, like Newton, touched on some of the equations of angular momentum in his Mechanica without further developing them. Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it. In 1799, Pierre-Simon Laplace first realized that a fixed plane was associated with rotation—his invariable plane. Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments". In 1852 Léon Foucault used a gyroscope in an experiment to display the Earth's rotation. William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time: ... a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation. In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward," probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications, which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries. However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English. See also References Further reading External links "What Do a Submarine, a Rocket and a Football Have in Common?
Why the prolate spheroid is the shape for success" (Scientific American, November 8, 2010) Conservation of Angular Momentum – a chapter from an online textbook Angular Momentum in a Collision Process – derivation of the three-dimensional case Angular Momentum and Rolling Motion – more momentum theory Mechanical quantities Rotation Conservation laws Moment (physics) Angular momentum
Angular momentum
[ "Physics", "Mathematics" ]
9,705
[ "Symmetry", "Physical phenomena", "Mechanical quantities", "Physical quantities", "Equations of physics", "Conservation laws", "Quantity", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Angular momentum", "Moment (physics)", "Momentum", "Physics theorems" ]
2,840
https://en.wikipedia.org/wiki/Plum%20pudding%20model
The plum pudding model was the first scientific model of the atom to describe an internal structure. It was first proposed by J. J. Thomson in 1904 following his discovery of the electron in 1897, and was rendered obsolete by Ernest Rutherford's discovery of the atomic nucleus in 1911. The model tried to account for two properties of atoms then known: that there are electrons, and that atoms have no net electric charge. Logically there had to be an equal amount of positive charge to balance out the negative charge of the electrons. As Thomson had no idea as to the source of this positive charge, he tentatively proposed that it was everywhere in the atom, and that the atom was spherical. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. In such a sphere, the negatively charged electrons would distribute themselves in a more or less even manner throughout the volume, simultaneously repelling each other while being attracted to the positive sphere's center. Despite Thomson's efforts, his model couldn't account for emission spectra and valencies. Based on experimental studies of alpha particle scattering (in the gold foil experiment), Ernest Rutherford developed an alternative model for the atom featuring a compact nucleus where the positive charge is concentrated. Thomson's model is popularly referred to as the "plum pudding model" with the notion that the electrons are distributed uniformly like raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been coined by popular science writers to make the model easier to understand for the layman. The analogy is perhaps misleading because Thomson likened the positive sphere to a liquid rather than a solid since he thought the electrons moved around in it. Significance Thomson's model marks the moment when the development of atomic theory passed from chemists to physicists. While atomic theory was widely accepted by chemists by the end of the 19th century, physicists remained skeptical because the atomic model lacked any properties which concerned their field, such as electric charge, magnetic moment, volume, or absolute mass. Thomson himself was a physicist and his atomic model was a byproduct of his investigations of cathode rays, by which he discovered electrons. Thomson hypothesized that the quantity, arrangement, and motions of electrons in the atom could explain its physical and chemical properties, such as emission spectra, valencies, reactivity, and ionization. He was on the right track, though his approach was based on classical mechanics and he did not have the insight to incorporate quantized energy into it. Background Throughout the 19th century evidence from chemistry and statistical mechanics accumulated that matter was composed of atoms. The structure of the atom was discussed, and by the end of the century the leading model was the vortex theory of the atom, proposed by William Thomson (later Lord Kelvin) in 1867. By 1890, J.J. Thomson had his own version called the "nebular atom" hypothesis, in which atoms were composed of immaterial vortices and suggested similarities between the arrangement of vortices and periodic regularity found among the chemical elements. Thomson's discovery of the electron in 1897 changed his views. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. 
However even late in 1899, few scientists believed in subatomic particles. Another emerging scientific theme of the 19th century was the discovery and study of radioactivity. Thomson discovered the electron by studying cathode rays, and in 1900 Henri Becquerel determined that the radiation from uranium, now called beta particles, had the same charge/mass ratio as cathode rays. These beta particles were believed to be electrons travelling at high speed. The particles were used by Thomson to probe atoms to find evidence for his atomic theory. The other form of radiation critical to this era of atomic models was alpha particles. Heavier and slower than beta particles, these were the key tool used by Rutherford to find evidence against Thomson's model. In addition to the emerging atomic theory, the electron, and radiation, the last element of history was the many studies of atomic spectra published in the late 19th century. Part of the attraction of the vortex model was its possible role in describing the spectral data as vibrational responses to electromagnetic radiation. Neither Thomson's model nor its successor, Rutherford's model, made progress towards understanding atomic spectra. That would have to wait until Niels Bohr built the first quantum-based atom model. Development Thomson's model was the first to assign a specific inner structure to an atom, though his earliest descriptions did not include mathematical formulas. From 1897 through 1913, Thomson proposed a series of increasingly detailed polyelectron models for the atom. His first versions were qualitative culminating in his 1906 paper and follow on summaries. Thomson's model changed over the course of its initial publication, finally becoming a model with much more mobility containing electrons revolving in the dense field of positive charge rather than a static structure. Thomson attempted unsuccessfully to reshape his model to account for some of the major spectral lines experimentally known for several elements. 1897 Corpuscles inside atoms In a paper titled Cathode Rays, Thomson demonstrated that cathode rays are not light but made of negatively charged particles which he called corpuscles. He observed that cathode rays can be deflected by electric and magnetic fields, which does not happen with light rays. In a few paragraphs near the end of this long paper Thomson discusses the possibility that atoms were made of these corpuscles, calling them primordial atoms. Thomson believed that the intense electric field around the cathode caused the surrounding gas molecules to split up into their component corpuscles, thereby generating cathode rays. Thomson thus showed evidence that atoms were divisible, though he did not attempt to describe their structure at this point. Thomson notes that he was not the first scientist to propose that atoms are divisible, making reference to William Prout who in 1815 found that the atomic weights of various elements were multiples of hydrogen's atomic weight and hypothesised that all atoms were made of hydrogen atoms fused together. Prout's hypothesis was dismissed by chemists when by the 1830s it was found that some elements seemed to have a non-integer atomic weight—e.g. chlorine has an atomic weight of about 35.45. But the idea continued to intrigue scientists. The discrepancies were eventually explained with the discovery of isotopes in 1912. 
A few months after Thomson's paper appeared, George FitzGerald suggested that the corpuscle identified by Thomson from cathode rays and proposed as parts of an atom was a "free electron", as described by physicist Joseph Larmor and Hendrik Lorentz. While Thomson did not adopt the terminology, the connection convinced other scientists that cathode rays were particles, an important step in their eventual acceptance of an atomic model based on sub-atomic particles. In 1899 Thomson reiterated his atomic model in a paper that showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. He estimated that the electron's mass was 0.0014 times that of the hydrogen ion (as a fraction: ). In the conclusion of this paper he writes: 1904 Mechanical model of the atom Thomson provided his first detailed description of the atom in his 1904 paper On the Structure of the Atom. Thomson starts with a short description of his model ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Primarily focused on the electrons, Thomson adopted the positive sphere from Kelvin's atom model proposed a year earlier. He then gives a detailed mechanical analysis of such a system, distributing the electrons uniformly around a ring. The attraction of the positive electrification is balanced by the mutual repulsion of the electrons. His analysis focuses on stability, looking for cases where small changes in position are countered by restoring forces. After discussing his many formulae for stability he turned to analysing patterns in the number of electrons in various concentric rings of stable configurations. These regular patterns Thomson argued are analogous to the periodic law of chemistry behind the structure of the periodic table. This concept, that a model based on subatomic particles could account for chemical trends, encouraged interest in Thomson's model and influenced future work even if the details Thomson's electron assignments turned out to be incorrect. Thomson at this point believed that all the mass of the atom was carried by the electrons. This would mean that even a small atom would have to contain thousands of electrons, and the positive electrification that encapsulated them was without mass. 1905 lecture on electron arrangements In a lecture delivered to the Royal Institution of Great Britain in 1905, Thomson explained that it was too computationally difficult for him to calculate the movements of large numbers of electrons in the positive sphere, so he proposed a practical experiment. This involved magnetised pins pushed into cork discs and set afloat in a basin of water. The pins were oriented such that they repelled each other. Above the centre of the basin was suspended an electromagnet that attracted the pins. The equilibrium arrangement the pins took informed Thomson on what arrangements the electrons in an atom might take. For instance, he observed that while five pins would arrange themselves in a stable pentagon around the centre, six pins could not form a stable hexagon. Instead, one pin would move to the centre and the other five would form a pentagon around the centre pin, and this arrangement was stable. 
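Thomson's magnet-and-cork analogue is simple to mimic numerically. The sketch below is only a rough illustration, not Thomson's own calculation: the linear restoring force stands in for the pull of the uniform positive sphere (or the overhead electromagnet), and the step size, iteration count and random seed are arbitrary. Relaxing five and six mutually repelling charges in a plane reproduces the behaviour just described, a single ring of five versus one centre charge surrounded by five.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(n, k=1.0, step=1e-3, iters=100_000):
    """Crude overdamped relaxation of n equal charges confined in a plane.

    Each charge is pulled toward the origin with force k*r (the force inside a
    uniformly charged sphere) and pushed away from every other charge by an
    inverse-square repulsion, mirroring Thomson's floating-magnet experiment.
    """
    pos = rng.normal(scale=0.5, size=(n, 2))
    for _ in range(iters):
        force = -k * pos                                    # confinement
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                      # no self-force
        force += (diff / dist[..., None] ** 3).sum(axis=1)  # mutual repulsion
        pos += step * force
    return pos

for n in (5, 6):
    radii = np.sort(np.linalg.norm(relax(n), axis=1))
    print(n, "charges, distances from centre:", np.round(radii, 2))
# Five charges settle on a single ring (equal radii); with six, one charge
# drops to the centre and the remaining five ring around it.
```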
As he added more pins, they would arrange themselves in concentric rings around the centre. The experiment functioned in two dimensions instead of three, but Thomson inferred the electrons in the atom arranged themselves in concentric shells and the could move within these shells but did not move from one shell to another them except when electrons were added or subtracted from the atom. 1906 Estimating electrons per atom Before 1906 Thomson considered the atomic weight to be due to the mass of the electrons (which he continued to call "corpuscles"). Based on his own estimates of the electron mass, an atom would need tens of thousands electrons to account for the mass. In 1906 he used three different methods, X-ray scattering, beta ray absorption, or optical properties of gases, to estimate that "number of corpuscles is not greatly different from the atomic weight". This reduced the number of electrons to tens or at most a couple of hundred and that in turn meant that the positive sphere in Thomson's model contained most of the mass of the atom. This meant that Thomson's mechanical stability work from 1904 and the comparison to the periodic table were no longer valid. Moreover, the alpha particle, so important to the next advance in atomic theory by Rutherford, would no longer be viewed as an atom containing thousands of electrons. In 1907, Thomson published The Corpuscular Theory of Matter which reviewed his ideas on the atom's structure and proposed further avenues of research. In Chapter 6, he further elaborates his experiment using magnetised pins in water, providing an expanded table. For instance, if 59 pins were placed in the pool, they would arrange themselves in concentric rings of the order 20-16-13-8-2 (from outermost to innermost). In Chapter 7, Thomson summarised his 1906 results on the number of electrons in an atom. He included one important correction: he replaced the beta-particle analysis with one based on the cathode ray experiments of August Becker, giving a result in better agreement with other approaches to the problem. Experiments by other scientists in this field had shown that atoms contain far fewer electrons than Thomson previously thought. Thomson now believed the number of electrons in an atom was a small multiple of its atomic weight: "the number of corpuscles in an atom of any element is proportional to the atomic weight of the element — it is a multiple, and not a large one, of the atomic weight of the element." This meant that almost all of the atom's mass had to be carried by the positive sphere, whatever it was made of. Thomson in this book estimated that a hydrogen atom is 1,700 times heavier than an electron (the current measurement is 1,837). Thomson noted that no scientist had yet found a positively charged particle smaller than a hydrogen ion. He also wrote that the positive charge of an atom is a multiple of a basic unit of positive charge, equal to the negative charge of an electron. Thomson refused to jump to the conclusion that the basic unit of positive charge has a mass equal to that of the hydrogen ion, arguing that scientists first had to know how many electrons an atom contains. For all he could tell, a hydrogen ion might still contain a few electrons—perhaps two electrons and three units of positive charge. 1910 Multiple scattering Thomson's difficulty with beta scattering in 1906 lead him to renewed interest in the topic. He encouraged J. 
Arnold Crowther to experiment with beta scattering through thin foils and, in 1910, Thomson produced a new theory of beta scattering. The two innovations in this paper was the introduction of scattering from the positive sphere of the atom and analysis that multiple or compound scattering was critical to the final results. This theory and Crowther's experimental results would be confronted by Rutherford's theory and Geiger and Mardsen new experiments with alpha particles. Another innovation in Thomson's 1910 paper was that he modelled how an atom might deflect an incoming beta particle if the positive charge of the atom existed in discrete units of equal but arbitrary size, spread evenly throughout the atom, separated by empty space, with each unit having a positive charge equal to the electron's negative charge. Thomson therefore came close to deducing the existence of the proton, which was something Rutherford eventually did. In Rutherford's model of the atom, the protons are clustered in a very small nucleus, but in Thomson's alternative model, the positive units were spread throughout the atom. Thomson's 1910 scattering model In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom. His work was based on beta scattering studies by James Crowther. Deflection by the positive sphere Thomson typically assumed the positive charge in the atom was uniformly distributed throughout its volume, encapsulating the electrons. In his 1910 paper, Thomson presented the following equation which isolated the effect of this positive sphere: where k is the Coulomb constant, qe is the charge of the beta particle, qg is the charge of the positive sphere, m is the mass of the beta particle, and R is the radius of the sphere. Because the atom is many thousands of times heavier than the beta particle, no correction for recoil is needed. Thomson did not explain how this equation was developed, but the historian John L. Heilbron provided an educated guess he called a "straight-line" approximation. Consider a beta particle passing through the positive sphere with its initial trajectory at a lateral distance b from the centre. The path is assumed to have a very small deflection and therefore is treated here as a straight line. Inside a sphere of uniformly distributed positive charge the force exerted on the beta particle at any point along its path through the sphere would be directed along the radius with magnitude: The component of force perpendicular to the trajectory and thus deflecting the path of the particle would be: The lateral change in momentum py is therefore The resulting angular deflection, , is given by where px is the average horizontal momentum taken to be equal to the incoming momentum. Since we already know the deflection is very small, we can treat as being equal to . To find the average deflection angle , the angle for each value of b and the corresponding L are added across the face sphere, then divided by the cross-section area. per Pythagorean theorem. This matches Thomson's formula in his 1910 paper. Deflection by the electrons Thomson modelled the collisions between a beta particle and the electrons of an atom by calculating the deflection of one collision then multiplying by a factor for the number of collisions as the particle crosses the atom. For the electrons within an arbitrary distance s of the beta particle's path, their mean distance will be . 
Therefore, the average deflection per electron will be where qe is the elementary charge, k is the Coulomb constant, m and v are the mass and velocity of the beta particle. The factor for the number of collisions was known to be the square root of the number of possible electrons along path. The number of electrons depends upon the density of electrons along the particle path times the path length L. The net deflection caused by all the electrons within this arbitrary cylinder of effect around the beta particle's path is where N0 is the number of electrons per unit volume and is the volume of this cylinder. Since Thomson calculated the deflection would be very small, he treats L as a straight line. Therefore where b is the distance of this chord from the centre. The mean of is given by the integral We can now replace in the equation for to obtain the mean deflection : where N is the number of electrons in the atom, equal to . Deflection by the positive charge in discrete units In his 1910 paper, Thomson proposed an alternative model in which the positive charge exists in discrete units separated by empty space, with those units being evenly distributed throughout the atom's volume. In this concept, the average scattering angle of the beta particle is given by: where σ is the ratio of the volume occupied by the positive charge to the volume of the whole atom. Thomson did not explain how he arrived at this equation. Net deflection To find the combined effect of the positive charge and the electrons on the beta particle's path, Thomson provided the following equation: Demise of the plum pudding model Thomson probed the structure of atoms through beta particle scattering, whereas his former student Ernest Rutherford was interested in alpha particle scattering. Beta particles are electrons emitted by radioactive decay, whereas alpha particles are essentially helium atoms, also emitted in process of decay. Alpha particles have considerably more momentum than beta particles and Rutherford found that matter scatters alpha particles in ways that Thomson's plum pudding model could not predict. Between 1908 and 1913, Ernest Rutherford, Hans Geiger, and Ernest Marsden collaborated on a series of experiments in which they bombarded thin metal foils with a beam of alpha particles and measured the intensity versus scattering angle of the particles. They found that the metal foil could scatter alpha particles by more than 90°. This should not have been possible according to the Thomson model: the scattering into large angles should have been negligible. The odds of a beta particle being scattered by more than 90° under such circumstances is astronomically small, and since alpha particles typically have much more momentum than beta particles, their deflection should be smaller still. The Thomson models simply could not produce electrostatic forces of sufficient strength to cause such large deflection. The charges in the Thomson model were too diffuse. This led Rutherford to discard the Thomson for a new model where the positive charge of the atom is concentrated in a tiny nucleus. Rutherford went on to make more compelling discoveries. In Thomson's model, the positive charge sphere was just an abstract component, but Rutherford found something concrete to attribute the positive charge to: particles he dubbed "protons". Whereas Thomson believed that the electron count was roughly correlated to the atomic weight, Rutherford showed that (in a neutral atom) it is exactly equal to the atomic number. 
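The contrast between compound small-angle scattering and the large deflections Rutherford observed can be made concrete with a toy calculation. The sketch below is only a caricature of Thomson's 1910 multiple-scattering analysis: the per-atom deflection, the number of atoms crossed in the foil, and the equal-probability left/right kick are illustrative assumptions, not Thomson's or Crowther's actual figures. It shows why a diffuse charge, which can only nudge the projectile many times by tiny amounts, essentially never produces a deflection beyond 90°.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions (not historical values).
theta_per_atom = 1e-4    # magnitude of the deflection per atom crossed, radians
atoms_crossed = 10_000   # atoms traversed while crossing a thin foil
trials = 1_000_000       # simulated projectiles

# Each traversal is a random walk in angle: atoms_crossed kicks of +/- theta.
# The net angle is theta * (heads - tails), drawn here from a binomial count.
heads = rng.binomial(atoms_crossed, 0.5, size=trials)
net = np.abs(theta_per_atom * (2 * heads - atoms_crossed))

print("typical net deflection (rad):", net.mean())   # ~ theta * sqrt(N) ~ 0.01
print("largest deflection seen (rad):", net.max())
print("fraction beyond 90 degrees:  ", np.mean(net > np.pi / 2))
# A 90-degree deflection would be a ~150-sigma fluctuation of this random walk,
# so the diffuse-charge picture cannot account for the alpha particles that
# Geiger and Marsden saw bouncing back.
```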
Thomson hypothesised that the arrangement of the electrons in the atom somehow determined the spectral lines of a chemical element. He was on the right track, but it had nothing to do with how electrons circulated in a sphere of positive charge. Scientists eventually discovered that it had to do with how electrons absorb and release energy in discrete quantities, moving through energy levels which correspond to emission and absorption spectra. Thomson had not incorporated quantum mechanics into his atomic model, which at the time was a very new field of physics. Niels Bohr and Erwin Schrödinger later incorporated quantum mechanics into the atomic model. Rutherford's nuclear model Rutherford's 1911 paper on alpha particle scattering showed that Thomson's scattering model could not explain the large-angle scattering and that multiple scattering was not necessary to explain the data. However, in the years immediately following its publication few scientists took note. The scattering model predictions were not considered definitive evidence against Thomson's plum pudding model. Thomson and Rutherford had pioneered scattering as a technique to probe atoms, and its reliability and value were still unproven. Before Rutherford's paper the alpha particle was considered an atom, not a compact mass. It was not clear why it should be a good probe. Moreover, Rutherford's paper did not discuss the atomic electrons vital to practical problems like chemistry or atomic spectroscopy. Rutherford's nuclear model would only become widely accepted after the work of Niels Bohr. Mathematical Thomson problem The Thomson problem in mathematics seeks the optimal distribution of equal point charges on the surface of a sphere. Unlike the original Thomson atomic model, the sphere in this purely mathematical model does not have a charge, and this causes all the point charges to move to the surface of the sphere by their mutual repulsion. There is still no general solution to Thomson's original problem of how electrons arrange themselves within a sphere of positive charge. Origin of the nickname The first known writer to compare Thomson's model to a plum pudding was an anonymous reporter in an article for the British pharmaceutical magazine The Chemist and Druggist in August 1906. The analogy was never used by Thomson or his colleagues. It seems to have been a conceit of popular science writers to make the model easier to understand for the layman. References Bibliography Foundational quantum physics Atoms Electron Periodic table Obsolete theories in physics 1904 in science
Plum pudding model
[ "Physics", "Chemistry" ]
4,521
[ "Electron", "Periodic table", "Molecular physics", "Theoretical physics", "Foundational quantum physics", "Quantum mechanics", "Atoms", "Matter", "Obsolete theories in physics" ]
2,844
https://en.wikipedia.org/wiki/History%20of%20atomic%20theory
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point. Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept. Philosophical atomism The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units. Groundwork Working in the late 17th century, Robert Boyle developed the concept of a chemical element as substance different from a compound. Near the end of the 18th century, a number of important developments in chemistry emerged without referring to the notion of an atomic theory. The first was Antoine Lavoisier who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving these are elements. Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction, matter does not appear nor disappear into thin air; the total mass remains the same even if the substances involved were transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This definition distinguished compounds from mixtures. Dalton's law of multiple proportions John Dalton studied data gathered by himself and by other scientists. He noticed a pattern that later came to be known as the law of multiple proportions: in compounds which contain two particular elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This suggested that each element combines with other elements in multiples of a basic quantity. 
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, and it wasn't his only mistake. But in other cases, he got their formulas right, as in the following examples: Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. The modern equivalents of his terms would be monoxide and dioxide. Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide and iron(III) oxide and their formulas are Fe2O2 and Fe2O3 respectively (iron(II) oxide's formula is normally written as FeO, but here it is written as Fe2O2 to contrast it with the other oxide). Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there is 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2. 
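The whole-number pattern in these examples can be checked mechanically. The short sketch below (the choice of Python's fractions module and the rounding tolerance are merely implementation details) takes the oxygen masses quoted above, each measured against a fixed mass of the other element, and reduces them to small whole-number ratios.

```python
import math
from fractions import Fraction

# Grams of oxygen combining with a fixed mass of the other element,
# taken from the worked examples above.
observations = {
    "tin oxides (per 100 g Sn)":     [13.5, 27.0],
    "iron oxides (per 100 g Fe)":    [28.0, 42.0],
    "nitrogen oxides (per 140 g N)": [80.0, 160.0, 320.0],
}

for name, masses in observations.items():
    smallest = min(masses)
    # Express every oxygen mass relative to the smallest, as a simple fraction.
    ratios = [Fraction(m / smallest).limit_denominator(10) for m in masses]
    # Clear the denominators to obtain the whole-number ratio.
    common = math.lcm(*(f.denominator for f in ratios))
    whole = [int(f * common) for f in ratios]
    print(name, "->", whole)

# Output: tin 1:2, iron 2:3, nitrogen 1:2:4 -- Dalton's law of multiple
# proportions in ratios of small whole numbers.
```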
Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles. Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second. Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century. Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of the myriad atoms as multiples of the hydrogen atom's weight, which Dalton knew was the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. The formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074. 
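Dalton's error and Avogadro's correction amount to a one-line calculation. The sketch below simply re-runs the arithmetic described above under the two competing assumptions about the formula of water; the function name and figures are taken from the text, not from any historical notebook.

```python
def oxygen_weight(oxygen_grams_per_gram_hydrogen, hydrogens_per_oxygen):
    """Atomic weight of oxygen relative to hydrogen, given the measured mass of
    oxygen combining with 1 g of hydrogen and an assumed formula for water."""
    return oxygen_grams_per_gram_hydrogen * hydrogens_per_oxygen

print(oxygen_weight(7, 1))   # 7  -- Dalton's value, assuming water is HO
print(oxygen_weight(7, 2))   # 14 -- the same measurement read with H2O
print(oxygen_weight(8, 2))   # 16 -- the modern mass ratio with H2O
```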
Opposition to atomic theory Dalton's atomic theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4. The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century. One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive". Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom", instead using the term "elementary molecule". Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought this only existed in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that cannot be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions. The modern definitions of atom and molecule—an atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the late half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro. Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction. A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of directly observing atoms. 
They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systemize what patterns they could directly observe. This generation of anti-atomists can be grouped in two camps. The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticist", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypothesis about reality altogether. In their view, only energy as part of thermodynamics should be the basis of physical models. These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties. Isomerism Scientists discovered some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (we now know their formulas as both AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements. In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane. Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types. Mendeleev's periodic table Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements that an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because it showed that the elements could be categorized by their atomic weight. 
Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions. The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is based on atomic number, which is equivalent to the nuclear charge, a change had to wait for the discovery of the nucleus. In addition, an entire row of the table was not shown because the noble gases had not been discovered when Mendeleev devised his table. Statistical mechanics In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of particles. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. These results were largely ignored for a century. James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function. Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of thermodynamics especially the second law relating to entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics." Boltzmann defended the atomistic hypothesis against major detractors from the time like Ernst Mach or energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality. At the beginning of the 20th century, Albert Einstein independently reinvented Gibbs' laws, because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would "not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate. Brownian motion In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms. Discovery of the electron Atoms were thought to be the smallest possible division of matter until 1899 when J. J. Thomson discovered the electron through his work on cathode rays. A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. 
Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen atom, the smallest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was. In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles. In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 Coulombs). In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. By this combination he showed that electron's mass was 0.0014 times that of hydrogen ions. These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge. In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and they transplant themselves from one atom to the next in a chain in the action of an electrical current. When electrons do not flow, their negative charge logically must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it). The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained that ions are atoms that have a surplus or shortage of electrons. Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a liquid, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces. More to the point, the positive electrification in Thomson's model was an abstraction, he did not propose anything concrete like a particle. Thomson's model was incomplete, it could not predict any of the known properties of the atom such as emission spectra or valencies. In 1906, Robert A. Millikan and Harvey Fletcher performed the oil drop experiment in which they measured the charge of an electron to be about -1.6 × 10−19, a value now defined as -1 e. 
Since the hydrogen ion and the electron were known to be indivisible and a hydrogen atom is neutral in charge, it followed that the positive charge in hydrogen was equal to this value, i.e. 1 e. Discovery of the nucleus Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons. Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium, etc.). Amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's numeric sequence in the order of atomic weights. The sequence number came to be called the atomic number and it replaced atomic weight in organizing the periodic table. Bohr model Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom. Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). 
This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910 and the 1912 John William Nicholson atomic model with angular momentum quantized in units of h/2π. The dynamical structure of these models was still classical, but in 1913, Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., the radius of its orbit) being determined by its energy. Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra). In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of He+. He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915. Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties. That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons. Discovery of the proton Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion was equal to the negative charge of a single electron. 
In an April 1911 paper concerning his studies on alpha particle scattering, Ernest Rutherford estimated that the charge of an atomic nucleus, expressed as a multiplier of hydrogen's nuclear charge (qe), is roughly half the atom's atomic weight. In June 1911, Van den Broek noted that on the periodic table, each successive chemical element increased in atomic weight on average by 2, which in turn suggested that each successive element's nuclear charge increased by 1 qe. In 1913, van den Broek further proposed that the electric charge of an atom's nucleus, expressed as a multiplier of the elementary charge, is equal to the element's sequential position on the periodic table. Rutherford defined this position as being the element's atomic number. In 1913, Henry Moseley measured the X-ray emissions of all the elements on the periodic table and found that the frequency of the X-ray emissions was a mathematical function of the element's atomic number and the charge of a hydrogen nucleus . In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off. These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name "proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920. The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element. During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons. Discovery of the neutron Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis would have explained why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay. Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". 
Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and an electron fused together, because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron. Modern quantum mechanical models In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment. A consequence of describing particles as waveforms rather than points is that it is mathematically impossible to calculate with precision both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, a concept first introduced by Werner Heisenberg in 1927. Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation.
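For orientation, the equation in question can be written in its time-independent form. For the single electron of a hydrogen atom it reads, in the conventional notation (\psi is the wave function, \hbar the reduced Planck constant, m_e the electron mass, e the elementary charge, \varepsilon_0 the vacuum permittivity, and r the electron-nucleus distance):

\left[ -\frac{\hbar^2}{2 m_e} \nabla^2 - \frac{e^2}{4 \pi \varepsilon_0 r} \right] \psi(\mathbf{r}) = E \, \psi(\mathbf{r})

Each normalizable solution \psi corresponds to one atomic orbital, and its eigenvalue E is the energy of an electron occupying that orbital. The uncertainty principle mentioned above is usually stated in the same notation as \Delta x \, \Delta p \geq \hbar / 2.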
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians, including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which contains just two electrons—numerical methods are used to solve the Schrödinger equation. Qualitatively, the shapes of the atomic orbitals of multi-electron atoms resemble the states of the hydrogen atom. The Pauli principle requires that these electrons be distributed among the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules. See also Spectroscopy Atom History of molecular theory Timeline of chemical element discoveries Introduction to quantum mechanics Kinetic theory of gases Atomism The Physical Principles of the Quantum Theory Footnotes Bibliography Further reading Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York. Alan J. Rocke (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Ohio State University Press, Columbus (open access full text at http://digital.case.edu/islandora/object/ksl%3Ax633gj985). External links Atomism by S. Mark Cohen. Atomic Theory – detailed information on atomic theory with respect to electrons and electricity. The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion Amount of substance Chemistry theories Foundational quantum physics Statistical mechanics
History of atomic theory
[ "Physics", "Chemistry", "Mathematics" ]
8,141
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Foundational quantum physics", "Statistical mechanics", "Quantum mechanics", "Chemical quantities", "Amount of substance", "Atomic physics", " molecular", "nan", "Atomic", "Wikipedia categories named after physical quantities"...
2,847
https://en.wikipedia.org/wiki/Aung%20San%20Suu%20Kyi
Aung San Suu Kyi (born 19 June 1945), sometimes abbreviated to Suu Kyi, is a Burmese politician who served as State Counsellor of Myanmar and Minister of Foreign Affairs from 2016 to 2021. She has served as the general secretary of the National League for Democracy (NLD) since the party's founding in 1988 and was registered as its chairperson while it was a legal party from 2011 to 2023. She played a vital role in Myanmar's transition from military junta to partial democracy in the 2010s. The youngest daughter of Aung San, Father of the Nation of modern-day Myanmar, and Khin Kyi, Aung San Suu Kyi was born in Rangoon, British Burma. After graduating from the University of Delhi in 1964 and St Hugh's College, Oxford in 1968, she worked at the United Nations for three years. She married Michael Aris in 1972, with whom she had two children. Aung San Suu Kyi rose to prominence in the 8888 Uprising of 8 August 1988 and became the General Secretary of the NLD, which she had newly formed with the help of several retired army officials who criticised the military junta. In the 1990 general election, NLD won 81% of the seats in Parliament, but the results were nullified, as the State Peace and Development Council (SPDC), the military government, refused to hand over power, resulting in an international outcry. She had been detained before the elections and remained under house arrest for almost 15 of the 21 years from 1989 to 2010, becoming one of the world's most prominent political prisoners. In 1999, Time magazine named her one of the "Children of Gandhi" and his spiritual heir to nonviolence. She survived an assassination attempt in the 2003 Depayin massacre when at least 70 people associated with the NLD were killed. Her party boycotted the 2010 general election, resulting in a decisive victory for the military-backed Union Solidarity and Development Party (USDP). Aung San Suu Kyi became a Pyithu Hluttaw MP while her party won 43 of the 45 vacant seats in the 2012 by-elections. In the 2015 general election, her party won a landslide victory, taking 86% of the seats in the Pyidaungsu Hluttaw—well more than the 67% supermajority needed to ensure that its preferred candidates were elected president and vice president in the Presidential Electoral College. Although she was prohibited from becoming the president due to a clause in the Myanmar constitution—her late husband and children are foreign citizens—she assumed the newly created role of State Counsellor of Myanmar, a role akin to a prime minister or a head of government. When she ascended to the office of state counsellor, Aung San Suu Kyi drew criticism from several countries, organisations and figures over Myanmar's inaction in response to the genocide of the Rohingya people in Rakhine State and refusal to acknowledge that the Myanmar's military had committed massacres. Under her leadership, Myanmar also drew criticism for prosecutions of journalists. In 2019, Aung San Suu Kyi appeared in the International Court of Justice where she defended the Myanmar military against allegations of genocide against the Rohingya. Aung San Suu Kyi, whose party had won the November 2020 Myanmar general election, was arrested on 1 February 2021 following a coup d'état that returned the Tatmadaw to power and sparked protests across the country. Several charges were filed against her, and on 6 December 2021, she was sentenced to four years in prison on two of them. 
Later, on 10 January 2022, she was sentenced to an additional four years on another set of charges. On 12 October 2022, she was convicted of two further charges of corruption and she was sentenced to two terms of three years' imprisonment to be served concurrent to each other. On 30 December 2022, her trials ended with another conviction and an additional sentence of seven years' imprisonment for corruption. Aung San Suu Kyi's final sentence was of 33 years in prison, later reduced to 27 years. The United Nations, most European countries, and the United States condemned the arrests, trials, and sentences as politically motivated. Name Aung San Suu Kyi, like other Burmese names, includes no surname, but is only a personal name, in her case derived from three relatives: "Aung San" from her father, "Suu" from her paternal grandmother, and "Kyi" from her mother Khin Kyi. In Myanmar, Aung San Suu Kyi is often referred to as Daw Aung San Suu Kyi. Daw, literally meaning "aunt", is not part of her name but is an honorific for any older and revered woman, akin to "Madam". She is sometimes addressed as Daw Suu or Amay Suu ("Mother Suu") by her supporters. Personal life Aung San Suu Kyi was born on 19 June 1945 in Rangoon (now Yangon), British Burma. According to Peter Popham, she was born in a small village outside Rangoon called Hmway Saung. Her father, Aung San, allied with the Japanese during World War II. Aung San founded the modern Burmese army and negotiated Burma's independence from the United Kingdom in 1947; he was assassinated by his rivals in the same year. She is a niece of Thakin Than Tun who was the husband of Khin Khin Gyi, the elder sister of her mother Khin Kyi. She grew up with her mother, Khin Kyi, and two brothers, Aung San Lin and Aung San Oo, in Rangoon. Aung San Lin died at the age of eight when he drowned in an ornamental lake on the grounds of the house. Her elder brother emigrated to San Diego, California, becoming a United States citizen. After Aung San Lin's death, the family moved to a house by Inya Lake where Aung San Suu Kyi met people of various backgrounds, political views, and religions. She was educated in Methodist English High School (now Basic Education High School No. 1 Dagon) for much of her childhood in Burma, where she was noted as having a talent for learning languages. She speaks four languages: Burmese, English (with a British accent), French, and Japanese. She is a Theravada Buddhist. Aung San Suu Kyi's mother, Khin Kyi, gained prominence as a political figure in the newly formed Burmese government. She was appointed Burmese ambassador to India and Nepal in 1960, and Aung San Suu Kyi followed her there. She studied in the Convent of Jesus and Mary School in New Delhi, and graduated from Lady Shri Ram College, a constituent college of the University of Delhi in New Delhi, with a degree in politics in 1964. Suu Kyi continued her education at St Hugh's College, Oxford, obtaining a B.A. degree in Philosophy, Politics and Economics in 1967, graduating with a third-class degree that was promoted per tradition to an MA in 1968. After graduating, she lived in New York City with family friend Ma Than E, who was once a popular Burmese pop singer. She worked at the United Nations for three years, primarily on budget matters, writing daily to her future husband, Dr. Michael Aris. On 1 January 1972, Aung San Suu Kyi and Aris, a scholar of Tibetan culture and literature, living abroad in Bhutan, were married. 
The following year, she gave birth to their first son, Alexander Aris, in London; their second son, Kim Aris, was born in 1977. Between 1985 and 1987, Aung San Suu Kyi was working toward a Master of Philosophy degree in Burmese literature as a research student at the School of Oriental and African Studies (SOAS), University of London. She was elected as an Honorary Fellow of St Hugh's in 1990. For two years, she was a Fellow at the Indian Institute of Advanced Studies (IIAS) in Shimla, India. She also worked for the government of the Union of Burma. In 1988, Aung San Suu Kyi returned to Burma to tend to her ailing mother. Aris' visit at Christmas 1995 was the last time that he and Aung San Suu Kyi met, as she remained in Burma and the Burmese dictatorship denied him any further entry visas. Aris was diagnosed with prostate cancer in 1997, which was later found to be terminal. Despite appeals from prominent figures and organisations, including the United States, UN Secretary-General Kofi Annan and Pope John Paul II, the Burmese government would not grant Aris a visa, saying that they did not have the facilities to care for him, and instead urged Aung San Suu Kyi to leave the country to visit him. She was at that time temporarily free from house arrest but was unwilling to depart, fearing that she would be refused re-entry if she left, as she did not trust the military junta's assurance that she could return. Aris died on his 53rd birthday on 27 March 1999. Since 1989, when his wife was first placed under house arrest, he had seen her only five times, the last of which was for Christmas in 1995. She was also separated from her children, who live in the United Kingdom, until 2011. On 2 May 2008, after Cyclone Nargis hit Burma, Aung San Suu Kyi's dilapidated lakeside bungalow lost its roof and electricity, while the cyclone also left entire villages in the Irrawaddy delta submerged. Plans to renovate and repair the house were announced in August 2009. Aung San Suu Kyi was released from house arrest on 13 November 2010. Political career Political beginning Coincidentally, when Aung San Suu Kyi returned to Burma in 1988, the long-time military leader of Burma and head of the ruling party, General Ne Win, stepped down. Mass demonstrations for democracy followed that event on 8 August 1988 (8-8-88, a day seen as auspicious), which were violently suppressed in what came to be known as the 8888 Uprising. On 24 August 1988, she made her first public appearance at the Yangon General Hospital, addressing protestors from a podium. On 26 August, she addressed half a million people at a mass rally in front of the Shwedagon Pagoda in the capital, calling for a democratic government. However, in September 1988, a new military junta took power. Influenced by Mahatma Gandhi's philosophy of non-violence and by Buddhist concepts, Aung San Suu Kyi entered politics to work for democratisation. She helped found the National League for Democracy on 27 September 1988, but was put under house arrest on 20 July 1989. She was offered freedom if she left the country, but she refused. Despite her philosophy of non-violence, a group of ex-military commanders and senior politicians who joined the NLD during the crisis believed that she was too confrontational and left the party. However, she retained enormous popularity and support among NLD youths with whom she spent most of her time.
During the crisis, the previous democratically elected Prime Minister of Burma, U Nu, took the initiative to form an interim government and invited opposition leaders to join him. Indian Prime Minister Rajiv Gandhi had signaled his readiness to recognize the interim government. However, Aung San Suu Kyi categorically rejected U Nu's plan by saying "the future of the opposition would be decided by masses of the people". Ex-Brigadier General Aung Gyi, another influential politician at the time of the 8888 crisis and the first chairman in the history of the NLD, followed suit and rejected the plan after Aung San Suu Kyi's refusal. Aung Gyi later accused several NLD members of being communists and resigned from the party. 1990 general election and Nobel Peace Prize In 1990, the military junta called a general election, in which the National League for Democracy (NLD) received 59% of the votes, guaranteeing the NLD 80% of the parliamentary seats. Some claim that Aung San Suu Kyi would have assumed the office of Prime Minister. Instead, the results were nullified and the military refused to hand over power, resulting in an international outcry. Aung San Suu Kyi was placed under house arrest at her home on University Avenue in Rangoon, during which time she was awarded the Sakharov Prize for Freedom of Thought in 1990, and the Nobel Peace Prize one year later. Her sons Alexander and Kim accepted the Nobel Peace Prize on her behalf. Aung San Suu Kyi used the Nobel Peace Prize's US$1.3 million prize money to establish a health and education trust for the Burmese people. Around this time, Aung San Suu Kyi chose nonviolence as an expedient political tactic, stating in 2007, "I do not hold to nonviolence for moral reasons, but for political and practical reasons." The decision of the Nobel Committee mentions: In 1995 Aung San Suu Kyi delivered the keynote address at the Fourth World Conference on Women in Beijing. 1996 attack On 9 November 1996, the motorcade that Aung San Suu Kyi was traveling in, along with other National League for Democracy leaders Tin Oo and Kyi Maung, was attacked in Yangon. About 200 men swooped down on the motorcade, wielding metal chains, metal batons, stones and other weapons. The car that Aung San Suu Kyi was in had its rear window smashed, and the car with Tin Oo and Kyi Maung had its rear window and two backdoor windows shattered. It is believed the offenders were members of the Union Solidarity and Development Association (USDA) who were allegedly paid Ks.500/- (approximately US$0.50) each to participate. The NLD lodged an official complaint with the police, and according to reports the government launched an investigation, but no action was taken. (Amnesty International 120297) House arrest Aung San Suu Kyi was placed under house arrest for a total of 15 years over a 21-year period, on numerous occasions, since she began her political career, during which time she was prevented from meeting her party supporters and international visitors. In an interview, she said that while under house arrest she spent her time reading philosophy, politics and biographies that her husband had sent her. She also passed the time playing the piano and was occasionally allowed visits from foreign diplomats as well as from her personal physician.
Although under house arrest, Aung San Suu Kyi was granted permission to leave Burma under the condition that she never return, which she refused: "As a mother, the greater sacrifice was giving up my sons, but I was always aware of the fact that others had given up more than me. I never forget that my colleagues who are in prison suffer not only physically, but mentally for their families who have no security outside—in the larger prison of Burma under authoritarian rule." The media were also prevented from visiting Aung San Suu Kyi, as occurred in 1998 when journalist Maurizio Giuliano, after photographing her, was stopped by customs officials who then confiscated all his films, tapes and some notes. In contrast, Aung San Suu Kyi did have visits from government representatives, such as during her autumn 1994 house arrest when she met the leader of Burma, Senior General Than Shwe and General Khin Nyunt on 20 September in the first meeting since she had been placed in detention. On several occasions during her house arrest, she had periods of poor health and as a result was hospitalised. The Burmese government detained and kept Aung San Suu Kyi imprisoned because it viewed her as someone "likely to undermine the community peace and stability" of the country, and used both Article 10(a) and 10(b) of the 1975 State Protection Act (granting the government the power to imprison people for up to five years without a trial), and Section 22 of the "Law to Safeguard the State Against the Dangers of Those Desiring to Cause Subversive Acts" as legal tools against her. She continuously appealed her detention, and many nations and figures continued to call for her release and that of 2,100 other political prisoners in the country. On 12 November 2010, days after the junta-backed Union Solidarity and Development Party (USDP) won elections conducted after a gap of 20 years, the junta finally agreed to sign orders allowing Aung San Suu Kyi's release, and her house arrest term came to an end on 13 November 2010. United Nations involvement The United Nations (UN) has attempted to facilitate dialogue between the junta and Aung San Suu Kyi. On 6 May 2002, following secret confidence-building negotiations led by the UN, the government released her; a government spokesman said that she was free to move "because we are confident that we can trust each other". Aung San Suu Kyi proclaimed "a new dawn for the country". However, on 30 May 2003 in an incident similar to the 1996 attack on her, a government-sponsored mob attacked her caravan in the northern village of Depayin, murdering and wounding many of her supporters. Aung San Suu Kyi fled the scene with the help of her driver, Kyaw Soe Lin, but was arrested upon reaching Ye-U. The government imprisoned her at Insein Prison in Rangoon. After she underwent a hysterectomy in September 2003, the government again placed her under house arrest in Rangoon. The results from the UN facilitation have been mixed; Razali Ismail, UN special envoy to Burma, met with Aung San Suu Kyi. Ismail resigned from his post the following year, partly because he was denied re-entry to Burma on several occasions. Several years later in 2006, Ibrahim Gambari, UN Undersecretary-General (USG) of Department of Political Affairs, met with Aung San Suu Kyi, the first visit by a foreign official since 2004. He also met with her later the same year. On 2 October 2007 Gambari returned to talk to her again after seeing Than Shwe and other members of the senior leadership in Naypyidaw. 
State television broadcast Aung San Suu Kyi with Gambari, stating that they had met twice. This was Aung San Suu Kyi's first appearance in state media in the four years since her current detention began. The United Nations Working Group for Arbitrary Detention published an Opinion that Aung San Suu Kyi's deprivation of liberty was arbitrary and in contravention of Article 9 of the Universal Declaration of Human Rights 1948, and requested that the authorities in Burma set her free, but the authorities ignored the request at that time. The U.N. report said that according to the Burmese Government's reply, "Daw Aung San Suu Kyi has not been arrested, but has only been taken into protective custody, for her own safety", and while "it could have instituted legal action against her under the country's domestic legislation ... it has preferred to adopt a magnanimous attitude, and is providing her with protection in her own interests". Such claims were rejected by Brigadier-General Khin Yi, Chief of Myanmar Police Force (MPF). On 18 January 2007, the state-run paper New Light of Myanmar accused Aung San Suu Kyi of tax evasion for spending her Nobel Prize money outside the country. The accusation followed the defeat of a US-sponsored United Nations Security Council resolution condemning Burma as a threat to international security; the resolution was defeated because of strong opposition from China, which has strong ties with the military junta (China later voted against the resolution, along with Russia and South Africa). In November 2007, it was reported that Aung San Suu Kyi would meet her political allies National League for Democracy along with a government minister. The ruling junta made the official announcement on state TV and radio just hours after UN special envoy Ibrahim Gambari ended his second visit to Burma. The NLD confirmed that it had received the invitation to hold talks with Aung San Suu Kyi. However, the process delivered few concrete results. On 3 July 2009, UN Secretary-General Ban Ki-moon went to Burma to pressure the junta into releasing Aung San Suu Kyi and to institute democratic reform. However, on departing from Burma, Ban Ki-moon said he was "disappointed" with the visit after junta leader Than Shwe refused permission for him to visit Aung San Suu Kyi, citing her ongoing trial. Ban said he was "deeply disappointed that they have missed a very important opportunity". Periods under detention 20 July 1989: Placed under house arrest in Rangoon under martial law that allows for detention without charge or trial for three years. 10 July 1995: Released from house arrest. 23 September 2000: Placed under house arrest. 6 May 2002: Released after 19 months. 30 May 2003: Arrested following the Depayin massacre, she was held in secret detention for more than three months before being returned to house arrest. 25 May 2007: House arrest extended by one year despite a direct appeal from U.N. Secretary-General Kofi Annan to General Than Shwe. 24 October 2007: Reached 12 years under house arrest, solidarity protests held at 12 cities around the world. 27 May 2008: House arrest extended for another year, which is illegal under both international law and Burma's own law. 11 August 2009: House arrest extended for 18 more months because of "violation" arising from the May 2009 trespass incident. 13 November 2010: Released from house arrest. 
2007 anti-government protests Protests led by Buddhist monks during Saffron Revolution began on 19 August 2007 following steep fuel price increases, and continued each day, despite the threat of a crackdown by the military. On 22 September 2007, although still under house arrest, Aung San Suu Kyi made a brief public appearance at the gate of her residence in Yangon to accept the blessings of Buddhist monks who were marching in support of human rights. It was reported that she had been moved the following day to Insein Prison (where she had been detained in 2003), but meetings with UN envoy Ibrahim Gambari near her Rangoon home on 30 September and 2 October established that she remained under house arrest. 2009 trespass incident On 3 May 2009, an American man, identified as John Yettaw, swam across Inya Lake to her house uninvited and was arrested when he made his return trip three days later. He had attempted to make a similar trip two years earlier, but for unknown reasons was turned away. He later claimed at trial that he was motivated by a divine vision requiring him to notify her of an impending terrorist assassination attempt. On 13 May, Aung San Suu Kyi was arrested for violating the terms of her house arrest because the swimmer, who pleaded exhaustion, was allowed to stay in her house for two days before he attempted the swim back. Aung San Suu Kyi was later taken to Insein Prison, where she could have faced up to five years' confinement for the intrusion. The trial of Aung San Suu Kyi and her two maids began on 18 May and a small number of protesters gathered outside. Diplomats and journalists were barred from attending the trial; however, on one occasion, several diplomats from Russia, Thailand and Singapore and journalists were allowed to meet Aung San Suu Kyi. The prosecution had originally planned to call 22 witnesses. It also accused John Yettaw of embarrassing the country. During the ongoing defence case, Aung San Suu Kyi said she was innocent. The defence was allowed to call only one witness (out of four), while the prosecution was permitted to call 14 witnesses. The court rejected two character witnesses, NLD members Tin Oo and Win Tin, and permitted the defence to call only a legal expert. According to one unconfirmed report, the junta was planning to, once again, place her in detention, this time in a military base outside the city. In a separate trial, Yettaw said he swam to Aung San Suu Kyi's house to warn her that her life was "in danger". The national police chief later confirmed that Yettaw was the "main culprit" in the case filed against Aung San Suu Kyi. According to aides, Aung San Suu Kyi spent her 64th birthday in jail sharing biryani rice and chocolate cake with her guards. Her arrest and subsequent trial received worldwide condemnation by the UN Secretary General Ban Ki-moon, the United Nations Security Council, Western governments, South Africa, Japan and the Association of Southeast Asian Nations, of which Burma is a member. The Burmese government strongly condemned the statement, as it created an "unsound tradition" and criticised Thailand for meddling in its internal affairs. The Burmese Foreign Minister Nyan Win was quoted in the state-run newspaper New Light of Myanmar as saying that the incident "was trumped up to intensify international pressure on Burma by internal and external anti-government elements who do not wish to see the positive changes in those countries' policies toward Burma". 
Ban responded to an international campaign by flying to Burma to negotiate, but Than Shwe rejected all of his requests. On 11 August 2009, the trial concluded with Aung San Suu Kyi being sentenced to imprisonment for three years with hard labour. This sentence was commuted by the military rulers to further house arrest of 18 months. On 14 August, US Senator Jim Webb visited Burma, visiting with junta leader General Than Shwe and later with Aung San Suu Kyi. During the visit, Webb negotiated Yettaw's release and deportation from Burma. Following the verdict of the trial, lawyers of Aung San Suu Kyi said they would appeal against the 18-month sentence. On 18 August, United States President Barack Obama asked the country's military leadership to set free all political prisoners, including Aung San Suu Kyi. In her appeal, Aung San Suu Kyi had argued that the conviction was unwarranted. However, her appeal against the August sentence was rejected by a Burmese court on 2 October 2009. Although the court accepted the argument that the 1974 constitution, under which she had been charged, was null and void, it also said the provisions of the 1975 security law, under which she has been kept under house arrest, remained in force. The verdict effectively meant that she would be unable to participate in the elections scheduled to take place in 2010—the first in Burma in two decades. Her lawyer stated that her legal team would pursue a new appeal within 60 days. Late 2000s: International support for release Aung San Suu Kyi has received vocal support from Western nations in Europe, Australia and North and South America, as well as India, Israel, Japan the Philippines and South Korea. In December 2007, the US House of Representatives voted unanimously 400–0 to award Aung San Suu Kyi the Congressional Gold Medal; the Senate concurred on 25 April 2008. On 6 May 2008, President George W. Bush signed legislation awarding Aung San Suu Kyi the Congressional Gold Medal. She is the first recipient in American history to receive the prize while imprisoned. More recently, there has been growing criticism of her detention by Burma's neighbours in the Association of Southeast Asian Nations (ASEAN), particularly from Indonesia, Thailand, the Philippines and Singapore. At one point Malaysia warned Burma that it faced expulsion from ASEAN as a result of the detention of Aung San Suu Kyi. Other nations including South Africa, Bangladesh and the Maldives also called for her release. The United Nations has urged the country to move towards inclusive national reconciliation, the restoration of democracy, and full respect for human rights. In December 2008, the United Nations General Assembly passed a resolution condemning the human rights situation in Burma and calling for Aung San Suu Kyi's release—80 countries voting for the resolution, 25 against and 45 abstentions. Other nations, such as China and Russia, are less critical of the regime and prefer to cooperate only on economic matters. Indonesia has urged China to push Burma for reforms. However, Samak Sundaravej, former Prime Minister of Thailand, criticised the amount of support for Aung San Suu Kyi, saying that "Europe uses Aung San Suu Kyi as a tool. If it's not related to Aung San Suu Kyi, you can have deeper discussions with Myanmar." Vietnam, however, did not support calls by other ASEAN member states for Myanmar to free Aung San Suu Kyi, state media reported Friday, 14 August 2009. 
The state-run Việt Nam News said Vietnam had no criticism of Myanmar's decision on 11 August 2009 to place Aung San Suu Kyi under house arrest for the next 18 months, effectively barring her from elections scheduled for 2010. "It is our view that the Aung San Suu Kyi trial is an internal affair of Myanmar", Vietnamese government spokesman Le Dung stated on the website of the Ministry of Foreign Affairs. In contrast with other ASEAN member states, Dung said Vietnam had always supported Myanmar and hoped it would continue to implement the "roadmap to democracy" outlined by its government. Nobel Peace Prize winners (Archbishop Desmond Tutu, the Dalai Lama, Shirin Ebadi, Adolfo Pérez Esquivel, Mairead Corrigan, Rigoberta Menchú, Prof. Elie Wiesel, US President Barack Obama, Betty Williams, Jody Williams and former US President Jimmy Carter) called for the rulers of Burma to release Aung San Suu Kyi to "create the necessary conditions for a genuine dialogue with Daw Aung San Suu Kyi and all concerned parties and ethnic groups to achieve an inclusive national reconciliation with the direct support of the United Nations". Some of the money she received as part of the award helped fund higher education grants to Burmese students through the London-based charity Prospect Burma. It was announced prior to the 2010 Burmese general election that Aung San Suu Kyi might be released "so she can organize her party". However, Aung San Suu Kyi was not allowed to run. On 1 October 2010 the government announced that she would be released on 13 November 2010. US President Barack Obama personally advocated the release of all political prisoners, especially Aung San Suu Kyi, during the US-ASEAN Summit of 2009. The US Government hoped that successful general elections would be an optimistic indicator of the Burmese government's sincerity towards eventual democracy. The Hatoyama government, which spent 2.82 billion yen in 2008, promised more Japanese foreign aid to encourage Burma to release Aung San Suu Kyi in time for the elections and to continue moving towards democracy and the rule of law. In a personal letter to Aung San Suu Kyi, UK Prime Minister Gordon Brown cautioned the Burmese government about the potential consequences of rigging elections, describing them as "condemning Burma to more years of diplomatic isolation and economic stagnation". Aung San Suu Kyi met with many heads of state and opened a dialogue with the Minister of Labour Aung Kyi (not to be confused with Aung San Suu Kyi). She was allowed to meet with senior members of her NLD party at the State House; however, these meetings took place under close supervision. 2010 release On the evening of 13 November 2010, Aung San Suu Kyi was released from house arrest. This was the date her detention had been set to expire according to a court ruling in August 2009 and came six days after a widely criticised general election. She appeared in front of a crowd of her supporters, who rushed to her house in Rangoon when nearby barricades were removed by the security forces. Aung San Suu Kyi had been detained for 15 of the past 21 years. The government newspaper New Light of Myanmar reported the release positively, saying she had been granted a pardon after serving her sentence "in good conduct". The New York Times suggested that the military government may have released Aung San Suu Kyi because it felt it was in a confident position to control her supporters after the election.
Her son Kim Aris was granted a visa in November 2010 to see his mother shortly after her release, for the first time in 10 years. He visited again on 5 July 2011, to accompany her on a trip to Bagan, her first trip outside Yangon since 2003. Her son visited again on 8 August 2011, to accompany her on a trip to Pegu, her second trip. Discussions were held between Aung San Suu Kyi and the Burmese government during 2011, which led to a number of official gestures to meet her demands. In October, around a tenth of Burma's political prisoners were freed in an amnesty and trade unions were legalised. In November 2011, following a meeting of its leaders, the NLD announced its intention to re-register as a political party to contend 48 by-elections necessitated by the promotion of parliamentarians to ministerial rank. Following the decision, Aung San Suu Kyi held a telephone conference with US President Barack Obama, in which it was agreed that Secretary of State Hillary Clinton would make a visit to Burma, a move received with caution by Burma's ally China. On 1 December 2011, Aung San Suu Kyi met with Hillary Clinton at the residence of the top-ranking US diplomat in Yangon. On 21 December 2011, Thai Prime Minister Yingluck Shinawatra met Aung San Suu Kyi in Yangon, marking Aung San Suu Kyi's "first-ever meeting with the leader of a foreign country". On 5 January 2012, British Foreign Minister William Hague met Aung San Suu Kyi and his Burmese counterpart. This represented a significant visit for Aung San Suu Kyi and Burma. Aung San Suu Kyi studied in the UK and maintains many ties there, whilst Britain is Burma's largest bilateral donor. During Aung San Suu Kyi's visit to Europe, she visited the Swiss parliament, collected her 1991 Nobel Prize in Oslo and her honorary degree from the University of Oxford. 2012 by-elections In December 2011, there was speculation that Aung San Suu Kyi would run in the 2012 national by-elections to fill vacant seats. On 18 January 2012, Aung San Suu Kyi formally registered to contest a Pyithu Hluttaw (lower house) seat in the Kawhmu Township constituency in special parliamentary elections to be held on 1 April 2012. The seat was previously held by Soe Tint, who vacated it after being appointed Construction Deputy Minister, in the 2010 election. She ran against Union Solidarity and Development Party candidate Soe Min, a retired army physician and native of Twante Township. On 3 March 2012, at a large campaign rally in Mandalay, Aung San Suu Kyi unexpectedly left after 15 minutes, because of exhaustion and airsickness. In an official campaign speech broadcast on Burmese state television's MRTV on 14 March 2012, Aung San Suu Kyi publicly campaigned for reform of the 2008 Constitution, removal of restrictive laws, more adequate protections for people's democratic rights, and establishment of an independent judiciary. The speech was leaked online a day before it was broadcast. A paragraph in the speech, focusing on the Tatmadaw's repression by means of law, was censored by authorities. Aung San Suu Kyi also called for international media to monitor the by-elections, while publicly pointing out irregularities in official voter lists, which include deceased individuals and exclude other eligible voters in the contested constituencies. On 21 March 2012, Aung San Suu Kyi was quoted as saying "Fraud and rule violations are continuing and we can even say they are increasing." 
When asked whether she would assume a ministerial post if given the opportunity, she said the following: On 26 March 2012, Aung San Suu Kyi suspended her nationwide campaign tour early, after a campaign rally in Myeik (Mergui), a coastal town in the south, citing health problems due to exhaustion and hot weather. On 1 April 2012, the NLD announced that Aung San Suu Kyi had won the vote for a seat in Parliament. A news broadcast on state-run MRTV, reading the announcements of the Union Election Commission, confirmed her victory, as well as her party's victory in 43 of the 45 contested seats, officially making Aung San Suu Kyi the Leader of the Opposition in the Pyidaungsu Hluttaw. Although she and other MP-elects were expected to take office on 23 April when the Hluttaws resumed session, National League for Democracy MP-elects, including Aung San Suu Kyi, said they might not take their oaths because of its wording; in its present form, parliamentarians must vow to "safeguard" the constitution. In an address on Radio Free Asia, she said "We don't mean we will not attend the parliament, we mean we will attend only after taking the oath ... Changing that wording in the oath is also in conformity with the Constitution. I don't expect there will be any difficulty in doing it." On 2 May 2012, National League for Democracy MP-elects, including Aung San Suu Kyi, took their oaths and took office, though the wording of the oath was not changed. According to the Los Angeles Times, "Suu Kyi and her colleagues decided they could do more by joining as lawmakers than maintaining their boycott on principle." On 9 July 2012, she attended the Parliament for the first time as a lawmaker. 2015 general election On 16 June 2012, Aung San Suu Kyi was finally able to deliver her Nobel acceptance speech (Nobel lecture) at Oslo's City Hall, two decades after being awarded the peace prize. In September 2012, Aung San Suu Kyi received in person the United States Congressional Gold Medal, which is the highest Congressional award. Although she was awarded this medal in 2008, at the time she was under house arrest, and was unable to receive the medal. Aung San Suu Kyi was greeted with bipartisan support at Congress, as part of a coast-to-coast tour in the United States. In addition, Aung San Suu Kyi met President Barack Obama at the White House. The experience was described by Aung San Suu Kyi as "one of the most moving days of my life". In 2014, she was listed as the 61st-most-powerful woman in the world by Forbes. On 6 July 2012, Aung San Suu Kyi announced on the World Economic Forum's website that she wanted to run for the presidency in Myanmar's 2015 elections. The current Constitution, which came into effect in 2008, bars her from the presidency because she is the widow and mother of foreigners—provisions that appeared to be written specifically to prevent her from being eligible. The NLD won a sweeping victory in those elections, winning at least 255 seats in the House of Representatives and 135 seats in the House of Nationalities. In addition, Aung San Suu Kyi won re-election to the House of Representatives. Under the 2008 constitution, the NLD needed to win at least a two-thirds majority in both houses to ensure that its candidate would become president. Before the elections, Aung San Suu Kyi announced that even though she is constitutionally barred from the presidency, she would hold the real power in any NLD-led government. 
On 30 March 2016, she became Minister for the President's Office, for Foreign Affairs, for Education and for Electric Power and Energy in President Htin Kyaw's government; later she relinquished the latter two ministries and President Htin Kyaw appointed her State Counsellor, a position, akin to that of a prime minister, created especially for her. The position of State Counsellor was approved by the House of Nationalities on 1 April 2016 and the House of Representatives on 5 April 2016. The next day, her role as State Counsellor was established. State counsellor and foreign minister (2016–2021) As soon as she became foreign minister, she invited Chinese Foreign Minister Wang Yi, Canadian Foreign Minister Stephane Dion and Italian Foreign Minister Paolo Gentiloni in April, and Japanese Foreign Minister Fumio Kishida in May, and discussed how to maintain good diplomatic relations with these countries. Initially, upon accepting the State Counsellor position, she granted amnesty to the students who were arrested for opposing the National Education Bill, and announced the creation of a commission on Rakhine State, where the Muslim Rohingya minority had long faced persecution. However, Aung San Suu Kyi's government soon failed to contain the ethnic conflicts in Shan and Kachin states, from which thousands of refugees fled to China, and by 2017 the persecution of the Rohingya by government forces had escalated to the point that it is widely described as a genocide. Aung San Suu Kyi, when interviewed, has denied the allegations of ethnic cleansing. She has also refused to grant citizenship to the Rohingya, instead taking steps to issue ID cards for residency but no guarantees of citizenship. Her tenure as State Counsellor of Myanmar has drawn international criticism for her failure to address her country's economic and ethnic problems, particularly the plight of the Rohingya following the 25 August 2017 ARSA attacks (described as "certainly one of the biggest refugee crises and cases of ethnic cleansing since the Second World War"), for the weakening of freedom of the press and for her style of leadership, described as imperious and "distracted and out of touch". During the COVID-19 pandemic in Myanmar, Suu Kyi chaired a National Central Committee responsible for coordinating the country's pandemic response. Response to the genocide of Rohingya Muslims and refugees In 2017, critics called for Aung San Suu Kyi's Nobel prize to be revoked, citing her silence over the genocide of Rohingya people in Myanmar. Some activists criticised Aung San Suu Kyi for her silence on the 2012 Rakhine State riots (later repeated during the 2015 Rohingya refugee crisis), and her indifference to the plight of the Rohingya, Myanmar's persecuted Muslim minority. In 2012, she told reporters she did not know if the Rohingya could be regarded as Burmese citizens. In a 2013 interview with the BBC's Mishal Husain, Aung San Suu Kyi did not condemn violence against the Rohingya and denied that Muslims in Myanmar have been subject to ethnic cleansing, insisting that the tensions were due to a "climate of fear" caused by "a worldwide perception that global Muslim power is 'very great'". She did condemn "hate of any kind" in the interview. According to Peter Popham, in the aftermath of the interview, she expressed anger at being interviewed by a Muslim.
Husain had challenged Aung San Suu Kyi that almost all of the impact of violence was against the Rohingya, in response to Aung San Suu Kyi's claim that violence was happening on both sides, and Peter Popham described her position on the issue as one of purposeful ambiguity for political gain. However, she said that she wanted to work towards reconciliation and she cannot take sides as violence has been committed by both sides. According to The Economist, her "halo has even slipped among foreign human-rights lobbyists, disappointed at her failure to make a clear stand on behalf of the Rohingya minority". However, she has spoken out "against a ban on Rohingya families near the Bangladeshi border having more than two children". In a 2015 BBC News article, reporter Jonah Fisher suggested that Aung San Suu Kyi's silence over the Rohingya issue is due to a need to obtain support from the majority Bamar ethnicity as she is in "the middle of a general election campaign". In May 2015, the Dalai Lama publicly called upon her to do more to help the Rohingya in Myanmar, claiming that he had previously urged her to address the plight of the Rohingya in private during two separate meetings and that she had resisted his urging. In May 2016, Aung San Suu Kyi asked the newly appointed United States Ambassador to Myanmar, Scot Marciel, not to refer to the Rohingya by that name as they "are not recognized as among the 135 official ethnic groups" in Myanmar. This followed Bamar protests at Marciel's use of the word "Rohingya". In 2016, Aung San Suu Kyi was accused of failing to protect Myanmar's Rohingya Muslims during the Rohingya genocide. State crime experts from Queen Mary University of London warned that Aung San Suu Kyi is "legitimising genocide" in Myanmar. Despite continued persecution of the Rohingya well into 2017, Aung San Suu Kyi was "not even admitting, let alone trying to stop, the army's well-documented campaign of rape, murder and destruction against Rohingya villages". On 4 September 2017, Yanghee Lee, the UN's special rapporteur on human rights in Myanmar, criticised Aung San Suu Kyi's response to the "really grave" situation in Rakhine, saying: "The de facto leader needs to step in—that is what we would expect from any government, to protect everybody within their own jurisdiction." The BBC reported that "Her comments came as the number of Rohingya fleeing to Bangladesh reached 87,000, according to UN estimates", adding that "her sentiments were echoed by Nobel Peace laureate Malala Yousafzai, who said she was waiting to hear from Ms Suu Kyi—who has not commented on the crisis since it erupted". The next day George Monbiot, writing in The Guardian, called on readers to sign a change.org petition to have the Nobel peace prize revoked, criticising her silence on the matter and asserting "whether out of prejudice or out of fear, she denies to others the freedoms she rightly claimed for herself. Her regime excludes—and in some cases seeks to silence—the very activists who helped to ensure her own rights were recognised." The Nobel Foundation replied that there existed no provision for revoking a Nobel Prize. Archbishop Desmond Tutu, a fellow peace prize holder, also criticised Aung San Suu Kyi's silence: in an open letter published on social media, he said: "If the political price of your ascension to the highest office in Myanmar is your silence, the price is surely too steep ... It is incongruous for a symbol of righteousness to lead such a country." 
On 13 September it was revealed that Aung San Suu Kyi would not be attending a UN General Assembly debate being held the following week to discuss the humanitarian crisis, with a Myanmar's government spokesman stating "perhaps she has more pressing matters to deal with". In October 2017, Oxford City Council announced that, following a unanimous cross-party vote, the honour of Freedom of the City, granted in 1997 in recognition of her "long struggle for democracy", was to be withdrawn following evidence emerging from the United Nations which meant that she was "no longer worthy of the honour". A few days later, Munsur Ali, a councillor for City of London Corporation, tabled a motion to rescind the Freedom of the City of London: the motion was supported by Catherine McGuinness, chair of the corporation's policy and resources committee, who expressed "distress ... at the situation in Burma and the atrocities committed by the Burmese military". On 13 November 2017, Bob Geldof returned his Freedom of the City of Dublin award in protest over Aung San Suu Kyi also holding the accolade, stating that he does not "wish to be associated in any way with an individual currently engaged in the mass ethnic cleansing of the Rohingya people of north-west Burma". Calling Aung San Suu Kyi a "handmaiden to genocide", Geldof added that he would take pride in his award being restored if it is first stripped from her. The Dublin City Council voted 59–2 (with one abstention) to revoke Aung San Suu Kyi's Freedom of the City award over Myanmar's treatment of the Rohingya people in December 2017, though Lord Mayor of Dublin Mícheál Mac Donncha denied the decision was influenced by protests by Geldof and members of U2. At the same meeting, the Councillors voted 37–7 (with 5 abstentions) to remove Geldof's name from the Roll of Honorary Freemen. In March 2018, the United States Holocaust Memorial Museum revoked Aung San Suu Kyi's Elie Wiesel Award, awarded in 2012, citing her failure "to condemn and stop the military's brutal campaign" against Rohingya Muslims. In May 2018, Aung San Suu Kyi was considered complicit in the crimes against Rohingyas in a report by Britain's International Development Committee. In August 2018, it was revealed that Aung San Suu Kyi would be stripped of her Freedom of Edinburgh award over her refusal to speak out against the crimes committed against the Rohingya. She had received the award in 2005 for promoting peace and democracy in Burma. This will be only the second time that anyone has ever been stripped of the award, after Charles Stewart Parnell lost it in 1890 due to a salacious affair. Also in August, a UN report, while describing the violence as genocide, added that Aung San Suu Kyi did as little as possible to prevent it. In early October 2018, both the Canadian Senate and its House of Commons voted unanimously to strip Aung San Suu Kyi of her honorary citizenship. This decision was caused by the Government of Canada's determination that the treatment of the Rohingya by Myanmar's government amounts to genocide. On 11 November 2018, Amnesty International announced it was revoking her Ambassador of Conscience award. In December 2019, Aung San Suu Kyi appeared in the International Court of Justice at The Hague where she defended the Burmese military against allegations of genocide against the Rohingya. In a speech of over 3,000 words, Aung San Suu Kyi did not use the term "Rohingya" in describing the ethnic group. 
She stated that the allegations of genocide were "incomplete and misleading", claiming that the situation was actually a Burmese military response to attacks by the Arakan Rohingya Salvation Army (ARSA). She also questioned how there could be "genocidal intent" when the Burmese government had opened investigations and also encouraged Rohingya to return after being displaced. However, experts have largely criticised the Burmese investigations as insincere, with the military declaring itself innocent and the government preventing a visit from investigators from the United Nations. Many Rohingya have also not returned due to perceived danger and a lack of rights in Myanmar. In January 2020, the International Court of Justice decided that there was a "real and imminent risk of irreparable prejudice to the rights" of the Rohingya. The court also took the view that the Burmese government's efforts to remedy the situation "do not appear sufficient" to protect the Rohingya. Therefore, the court ordered the Burmese government to take "all measures within its power" to protect the Rohingya from genocidal actions. The court also instructed the Burmese government to preserve evidence and report back to the court at timely intervals about the situation. Arrests and prosecution of journalists In December 2017, two Reuters journalists, Wa Lone and Kyaw Soe Oo, were arrested while investigating the Inn Din massacre of Rohingyas. Suu Kyi publicly commented in June 2018 that the journalists "weren't arrested for covering the Rakhine issue", but because they had broken Myanmar's Official Secrets Act. As the journalists were then on trial for violating the Official Secrets Act, Aung San Suu Kyi's presumption of their guilt was criticised by rights groups for potentially influencing the verdict. American diplomat Bill Richardson said that he had privately discussed the arrest with Suu Kyi, and that she reacted angrily and labelled the journalists "traitors". A police officer testified that he was ordered by superiors to use entrapment to frame and arrest the journalists; he was later jailed and his family evicted from their home in the police camp. The judge found the journalists guilty in September 2018 and sentenced them to seven years in jail. Aung San Suu Kyi reacted to widespread international criticism of the verdict by stating: "I don't think anyone has bothered to read" the judgement as it had "nothing to do with freedom of expression at all", but the Official Secrets Act. She also challenged critics to "point out where there has been a miscarriage of justice", and told the two Reuters journalists that they could appeal their case to a higher court. In September 2018, the Office of the United Nations High Commissioner for Human Rights issued a report that since Aung San Suu Kyi's party, the NLD, came to power, the arrests and criminal prosecutions of journalists in Myanmar by the government and military, under laws which are too vague and broad, have "made it impossible for journalists to do their job without fear or favour." 2021 arrest and trial On 1 February 2021, Aung San Suu Kyi was arrested and deposed by Myanmar's military, along with other leaders of her National League for Democracy (NLD) party, after the military declared the November 2020 general election results fraudulent. A 1 February court order authorised her detention for 15 days, stating that soldiers searching her Naypyidaw villa had uncovered imported communications equipment lacking proper paperwork. 
Aung San Suu Kyi was transferred to house arrest on the same evening, and on 3 February was formally charged with illegally importing ten or more walkie-talkies. She faced up to three years in prison on the charges. According to The New York Times, the charge "echoed previous accusations of esoteric legal crimes (and) arcane offenses" used by the military against critics and rivals. As of 9 February, Aung San Suu Kyi continued to be held incommunicado, without access to international observers or legal representation of her choice. US President Joe Biden raised the threat of new sanctions as a result of the Myanmar military coup. In a statement, UN Secretary-General António Guterres said that "These developments represent a serious blow to democratic reforms in Myanmar." Volkan Bozkır, President of the UN General Assembly, also voiced his concerns, tweeting "Attempts to undermine democracy and rule of law are unacceptable", and called for the "immediate release" of the detained NLD party leaders. On 1 April 2021, Aung San Suu Kyi was charged with a fifth offence, relating to violation of the Official Secrets Act. According to her lawyer, it was the most serious charge brought against her since the coup and could carry a sentence of up to 14 years in prison if convicted. On 12 April 2021, Aung San Suu Kyi faced another charge, this time "under section 25 of the natural disaster management law". According to her lawyer, it was her sixth indictment. She appeared in court via video link and at that point faced five charges in the capital Naypyidaw and one in Yangon. On 28 April 2021, the National Unity Government (NUG), in which Aung San Suu Kyi symbolically retained her position, stated that there would be no talks with the junta until all political prisoners, including her, were set free. This move by her supporters came days after an ASEAN-supported consensus with the junta leadership. However, on 8 May 2021, the junta designated the NUG as a terrorist organisation and warned citizens not to cooperate with or give aid to the parallel government, stripping Aung San Suu Kyi of her symbolic position. On 10 May 2021, her lawyer said she would appear in court in person for the first time since her arrest after the Supreme Court ruled that she could attend in person and meet her lawyers. She had previously only been allowed to do so remotely from her home. On 21 May 2021, a military junta commission was formed to dissolve Aung San Suu Kyi's National League for Democracy (NLD) on grounds of election fraud in the November 2020 election. On 22 May 2021, during his first interview since the coup, junta leader Min Aung Hlaing reported that she was in good health at her home and that she would appear in court in a matter of days. On 23 May 2021, the European Union expressed support for Aung San Suu Kyi's party and condemned the commission aimed at dissolving the party, echoing the NLD's statement released earlier in the week. On 24 May 2021, Aung San Suu Kyi appeared in person in court for the first time since the coup to face the "incitement to sedition" charge against her. During the 30-minute hearing, she said that she was not fully aware of what was going on outside, as she had no access to full information, and declined to respond on those matters. On the possibility of her party's forced dissolution, she was quoted as saying: "Our party grew out of the people so it will exist as long as people support it." 
In her meeting with her lawyers, Aung San Suu Kyi also wished people "good health". On 2 June 2021, it was reported that the military had moved her (as well as Win Myint) from their homes to an unknown location. On 10 June 2021, Aung San Suu Kyi was charged with corruption, the most serious charge brought against her, which carries a maximum penalty of 15 years' imprisonment. Aung San Suu Kyi's lawyers said the charges were designed to keep her out of the public eye. On 14 June 2021, the trial against Aung San Suu Kyi began. Any conviction would prevent her from running for office again. Aung San Suu Kyi's lawyers attempted to have prosecution testimony against her on the sedition charge disqualified, but the motion was denied by the judge. On 13 September 2021, court proceedings were to resume against her, but they were postponed after Aung San Suu Kyi reported "minor health issues" that prevented her from attending court in person. On 4 October 2021, Aung San Suu Kyi asked the judge to reduce the frequency of her court appearances because of her fragile health, which she described as "strained". In November, the Myanmar courts deferred the first verdicts in the trial without further explanation or giving dates. In the same month, she was again charged with corruption, related to the purchase and rental of a helicopter, bringing the total of charges to nearly a dozen. On 6 December 2021, Suu Kyi was sentenced to four years in jail on charges of inciting dissent and violating COVID-19 protocols, while still facing multiple further charges and potential sentences. Following a partial pardon by the chief of the military government, Aung San Suu Kyi's four-year sentence was reduced to two years' imprisonment. On 10 January 2022, the military court in Myanmar sentenced Suu Kyi to an additional four years in prison on a number of charges including "importing and owning walkie-talkies" and "breaking coronavirus rules". The trials, which were closed to the public, the media, and any observers, were described as a "courtroom circus of secret proceedings on bogus charges" by the deputy director for Asia of Human Rights Watch. On 27 April 2022, Aung San Suu Kyi was sentenced to five years in jail on corruption charges. On 22 June 2022, junta authorities ordered that all further legal proceedings against Suu Kyi would take place in prison venues instead of a courtroom. No explanation of the decision was given. Citing unidentified sources, the BBC reported that Suu Kyi was also moved on 22 June from house arrest, where she had had close companions, to solitary confinement in a specially built area inside a prison in Naypyidaw. This was the same prison in which Win Myint had similarly been placed in solitary confinement. The military confirmed that Suu Kyi had been moved to prison. On 15 August 2022, sources following Aung San Suu Kyi's court proceedings said that she was sentenced to an additional six years' imprisonment after being found guilty on four corruption charges, bringing her overall sentences to 17 years in prison. In September 2022, she was convicted of election fraud and breaching the state secrets act and sentenced to a total of six years in prison for both convictions, increasing her overall sentence to 23 years in prison. By 12 October 2022, she had been sentenced to 26 years' imprisonment on ten charges in total, including five corruption charges. 
On 30 December 2022, her trials ended with another conviction and an additional sentence of seven years' imprisonment for corruption, bringing Aung San Suu Kyi's combined sentences to a total of 33 years in prison. On 12 July 2023, Thailand's foreign minister Don Pramudwinai said at the ASEAN Foreign Ministers' Meeting in Jakarta that he had met with Aung San Suu Kyi during his visit to Myanmar. On 1 August 2023, the military junta granted Suu Kyi a partial pardon, reducing her sentence to a total of 27 years in prison. Prior to the pardon, she was moved from prison to a VIP government residence, according to an official from the NLD party. However, it was reported that she had been back in prison since the beginning of September 2023; the exact date on which she was sent back to prison is unknown. Since January, Aung San Suu Kyi and her lawyers have been trying to have six corruption charges overturned; to date, the requests have been repeatedly denied. On 16 April 2024, the military announced that Aung San Suu Kyi had been transferred to house arrest due to a heat wave. However, pro-democracy publications such as The Irrawaddy claimed that she remained in prison, with air conditioners being added to her cell. Political beliefs Asked what democratic models Myanmar could look to, she said: "We have many, many lessons to learn from various places, not just the Asian countries like South Korea, Taiwan, Mongolia, and Indonesia." She also cited "Eastern Europe and countries, which made the transition from communist autocracy to democracy in the 1980s and 1990s, and the Latin American countries, which made the transition from military governments. And we cannot of course forget South Africa, because although it wasn't a military regime, it was certainly an authoritarian regime." She added: "We wish to learn from everybody who has achieved a transition to democracy, and also ... our great strong point is that, because we are so far behind everybody else, we can also learn which mistakes we should avoid." In a nod to the deep US political divide between the Republicans, led by Mitt Romney, and the Democrats, led by Barack Obama, who were then battling to win the 2012 presidential election, she stressed, "Those of you who are familiar with American politics I'm sure understand the need for negotiated compromise." Related organisations Freedom Now, a US-based non-profit organisation, was retained in 2006 by a member of her family to help secure Aung San Suu Kyi's release from house arrest. The organisation secured several opinions from the UN Working Group on Arbitrary Detention that her detention was in violation of international law; engaged in political advocacy such as spearheading a letter from 112 former Presidents and Prime Ministers to UN Secretary-General Ban Ki-moon urging him to go to Burma to seek her release, which he did six weeks later; and published numerous op-eds and spoke widely to the media about her ongoing detention. Its representation of her ended when she was released from house arrest on 13 November 2010. Aung San Suu Kyi has been an honorary board member of International IDEA and ARTICLE 19 since her detention, and has received support from these organisations. The Vrije Universiteit Brussel and the University of Louvain (UCLouvain), both located in Belgium, granted her the title of Doctor Honoris Causa. In 2003, the Freedom Forum recognised Aung San Suu Kyi's efforts to promote democracy peacefully with the Al Neuharth Free Spirit of the Year Award, which was presented to her via satellite because she was under house arrest. 
She was awarded one million dollars. In June of each year, the U.S. Campaign for Burma organises hundreds of "Arrest Yourself" house parties around the world in support of Aung San Suu Kyi. At these parties, the organisers keep themselves under house arrest for 24 hours, invite their friends, and learn more about Burma and Aung San Suu Kyi. The Freedom Campaign, a joint effort between the Human Rights Action Center and US Campaign for Burma, looks to raise worldwide attention to the struggles of Aung San Suu Kyi and the people of Burma. The Burma Campaign UK is a UK-based NGO (Non-Governmental Organisation) that aims to raise awareness of Burma's struggles and follow the guidelines established by the NLD and Aung San Suu Kyi. St Hugh's College, Oxford, where she studied, had a Burmese theme for its annual ball in support of her in 2006. The university later awarded her an honorary doctorate in civil law on 20 June 2012 during her visit to her alma mater. Aung San Suu Kyi is the official patron of The Rafto Human Rights House in Bergen, Norway. She received the Thorolf Rafto Memorial Prize in 1990. She was made an honorary free person of the City of Dublin, Ireland in November 1999, although a space had been left on the roll of signatures to symbolise her continued detention. This was subsequently revoked on 13 December 2017. In November 2005 the human rights group Equality Now proposed Aung San Suu Kyi as a potential candidate, among other qualifying women, for the position of U.N. Secretary General. In the proposed list of qualified women Aung San Suu Kyi was recognised by Equality Now as the Prime Minister-Elect of Burma. The UN's special envoy to Myanmar, Ibrahim Gambari, met Aung San Suu Kyi on 10 March 2008 before wrapping up his trip to the military-ruled country. Aung San Suu Kyi was an honorary member of The Elders, a group of eminent global leaders brought together by Nelson Mandela. Her ongoing detention meant that she was unable to take an active role in the group, so The Elders placed an empty chair for her at their meetings. The Elders have consistently called for the release of all political prisoners in Burma. Upon her election to parliament, she stepped down from her post. In 2010, Aung San Suu Kyi was given an honorary doctorate from the University of Johannesburg. In 2011, Aung San Suu Kyi was named the Guest Director of the 45th Brighton Festival. She was part of the international jury of Human Rights Defenders and Personalities who helped to choose a universal Logo for Human Rights in 2011. In June 2011, the BBC announced that Aung San Suu Kyi was to deliver the 2011 Reith Lectures. The BBC covertly recorded two lectures with Aung San Suu Kyi in Burma, which were then smuggled out of the country and brought back to London. The lectures were broadcast on BBC Radio 4 and the BBC World Service on 28 June 2011 and 5 July 2011. On 8 March 2012, Canadian Foreign Affairs Minister John Baird presented Aung San Suu Kyi with a certificate of honorary Canadian citizenship and an informal invitation to visit Canada. The honorary citizenship was revoked in September 2018 due to the Rohingya conflict. In April 2012, British Prime Minister David Cameron became the first leader of a major world power to visit Aung San Suu Kyi and the first British prime minister to visit Burma since the 1950s. During his visit, Cameron invited Aung San Suu Kyi to Britain where she would be able to visit her 'beloved' Oxford, an invitation which she later accepted. She visited Britain on 19 June 2012. 
In 2012 she received the honorary degree of Doctor of Civil Law from the University of Oxford. In May 2012, Aung San Suu Kyi received the inaugural Václav Havel Prize for Creative Dissent from the Human Rights Foundation. On 29 May 2012, Prime Minister Manmohan Singh of India visited Aung San Suu Kyi and, during his visit, invited her to India as well. She started her six-day visit to India on 16 November 2012, where among the places she visited was her alma mater Lady Shri Ram College in New Delhi. In 2012, Aung San Suu Kyi set up the charity Daw Khin Kyi Foundation to improve health, education and living standards in underdeveloped parts of Myanmar. The charity was named after Aung San Suu Kyi's mother. Htin Kyaw played a leadership role in the charity before his election as President of Myanmar. The charity runs a Hospitality and Catering Training Academy in Kawhmu Township, in Yangon Region, and runs a mobile library service which in 2014 had 8000 members. Seoul National University in South Korea conferred an honorary doctorate on Aung San Suu Kyi in February 2013. The University of Bologna, Italy, conferred an honorary doctorate in philosophy on Aung San Suu Kyi in October 2013. Monash University, The Australian National University, the University of Sydney and the University of Technology, Sydney conferred honorary degrees on Aung San Suu Kyi in November 2013. In popular culture The life of Aung San Suu Kyi and her husband Michael Aris is portrayed in Luc Besson's 2011 film The Lady, in which they are played by Michelle Yeoh and David Thewlis. Yeoh visited Aung San Suu Kyi in 2011 before the film's release in November. In John Boorman's 1995 film Beyond Rangoon, Aung San Suu Kyi was played by Adelle Lutz. In 2005, Irish songwriters Damien Rice and Lisa Hannigan released the single "Unplayed Piano" in support of the Free Aung San Suu Kyi 60th Birthday Campaign taking place at the time. Irish rock band U2 wrote the song "Walk On" in tribute to Aung San Suu Kyi. It is the fourth track on their tenth studio album, All That You Can't Leave Behind (2000), and would later be issued as a single. In the song's official video, lead singer Bono wears a t-shirt with her image and name on the front. "Walk On" won Record of the Year at the 2002 Grammy Awards, which also featured U2 performing the song. Bono publicised her plight during the U2 360° Tour, 2009–2011. Saxophonist Wayne Shorter composed a song titled "Aung San Suu Kyi". It appears on his albums 1+1 (with pianist Herbie Hancock) and Footprints Live!. Health problems Aung San Suu Kyi underwent surgery for a gynecological condition in September 2003 at Asia Royal Hospital during her house arrest. She also underwent minor foot surgery in December 2013 and eye surgery in April 2016. In June 2012, her doctor Tin Myo Win said that she had no serious health problems, but weighed only , had low blood pressure, and could become weak easily. After being arrested and detained on 1 February 2021, there were concerns that Aung San Suu Kyi's health was deteriorating. However, according to the military's spokesperson Zaw Min Tun, special attention was being given to her health and living conditions. Don Pramudwinai also said that "she was in good health, both physically and mentally". Although a junta spokesperson claimed that she was in good health, it has been reported that since being sent back to prison in September 2023 her health has been worsening and she is "suffering of toothache and unable to eat". 
Her request to see a dentist had been denied. Her son is urging the junta to allow Aung San Suu Kyi to receive medical assistance. Books Freedom from Fear (1991) Letters from Burma (1991) Let's Visit Nepal (1985) Honours List of awards and honours received by Aung San Suu Kyi See also List of civil rights leaders List of Nobel laureates affiliated with Kyoto University State Counsellor of Myanmar List of foreign ministers in 2017 List of current foreign ministers Notes References Bibliography Miller, J. E. (2001). Who's Who in Contemporary Women's Writing. Routledge. Reid, R., Grosberg, M. (2005). Myanmar (Burma). Lonely Planet. Stewart, Whitney (1997). Aung San Suu Kyi: Fearless Voice of Burma. Twenty-First Century Books. Further reading Combs, Daniel. Until the World Shatters: Truth, Lies, and the Looting of Myanmar (2021). Aung San Suu Kyi (Modern Peacemakers) (2007) by Judy L. Hasday, The Lady: Aung San Suu Kyi: Nobel Laureate and Burma's Prisoner (2002; 1998 hardcover) by Barbara Victor, The Lady and the Peacock: The Life of Aung San Suu Kyi (2012) by Peter Popham, Perfect Hostage: A Life of Aung San Suu Kyi (2007) by Justin Wintle, Tyrants: The World's 20 Worst Living Dictators (2006) by David Wallechinsky, Aung San Suu Kyi (Trailblazers of the Modern World) (2004) by William Thomas, No Logo: No Space, No Choice, No Jobs (2002) by Naomi Klein, Mental culture in Burmese crisis politics: Aung San Suu Kyi and the National League for Democracy (ILCAA Study of Languages and Cultures of Asia and Africa Monograph Series) (1999) by Gustaaf Houtman, Aung San Suu Kyi: Standing Up for Democracy in Burma (Women Changing the World) (1998) by Bettina Ling, Prisoner for Peace: Aung San Suu Kyi and Burma's Struggle for Democracy (Champions of Freedom Series) (1994) by John Parenteau, Des femmes prix Nobel de Marie Curie à Aung San Suu Kyi, 1903–1991 (1992) by Charlotte Kerner, Nicole Casanova, Gidske Anderson, Aung San Suu Kyi, towards a new freedom (1998) by Chin Geok Ang, Aung San Suu Kyi's struggle: Its principles and strategy (1997) by Mikio Oishi, Finding George Orwell in Burma (2004) by Emma Larkin, Character Is Destiny: Inspiring Stories Every Young Person Should Know and Every Adult Should Remember (2005) by John McCain, Mark Salter. Random House. Under the Dragon: A Journey Through Burma (1998/2010) by Rory MacLean External links Aung San Suu Kyi's website (Site appears to be inactive. 
Last posting was in July 2014) (withdrawn 2018) Prime ministers of Myanmar 21st-century women prime ministers 1945 births 20th-century Burmese women writers 20th-century Burmese writers 21st-century Burmese politicians 21st-century Burmese women politicians 21st-century Burmese women writers 21st-century Burmese writers Alumni of SOAS University of London Alumni of St Hugh's College, Oxford Amnesty International prisoners of conscience held by Myanmar Buddhist pacifists Burmese activists Burmese democracy activists Burmese human rights activists Burmese Nobel laureates Burmese pacifists Burmese prisoners and detainees Burmese revolutionaries Burmese socialists Burmese Theravada Buddhists Burmese women activists Burmese women diplomats Burmese women in politics Civil rights activists Congressional Gold Medal recipients Family of Aung San Fellows of St Hugh's College, Oxford Fellows of the Royal College of Surgeons of Edinburgh Female foreign ministers Female heads of government Foreign ministers of Myanmar Gandhians Heads of government who were later imprisoned Honorary companions of the Order of Australia International Simón Bolívar Prize recipients Lady Shri Ram College alumni Leaders ousted by a coup Living people Members of Pyithu Hluttaw National League for Democracy politicians Nobel Peace Prize laureates Nonviolence advocates Olof Palme Prize laureates Activists from Yangon Politicians from Yangon People stripped of honorary degrees Presidential Medal of Freedom recipients Prisoners and detainees of Myanmar Sakharov Prize laureates Women civil rights activists Women government ministers of Myanmar Women Nobel laureates Women opposition leaders
Aung San Suu Kyi
[ "Technology" ]
15,882
[ "Women Nobel laureates", "Women in science and technology" ]
2,861
https://en.wikipedia.org/wiki/Advertising
Advertising is the practice and techniques employed to bring attention to a product or service. Advertising aims to present a product or service in terms of utility, advantages and qualities of interest to consumers. It is typically used to promote a specific good or service, but there are a wide range of uses, the most common being commercial advertisement. Commercial advertisements often seek to generate increased consumption of their products or services through "branding", which associates a product name or image with certain qualities in the minds of consumers. On the other hand, ads that intend to elicit an immediate sale are known as direct-response advertising. Non-commercial entities that advertise more than consumer products or services include political parties, interest groups, religious organizations, and governmental agencies. Non-profit organizations may use free modes of persuasion, such as a public service announcement. Advertising may also help to reassure employees or shareholders that a company is viable or successful. In the 19th century, soap businesses were among the first to employ large-scale advertising campaigns. Thomas J. Barratt was hired by Pears to be its brand manager—the first of its kind—and in addition to creating slogans and images he recruited West End stage actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product. Modern advertising originated with the techniques introduced with tobacco advertising in the 1920s, most significantly with the campaigns of Edward Bernays, considered the founder of modern, "Madison Avenue" advertising. Worldwide spending on advertising in 2015 amounted to an estimated . Advertising's projected distribution for 2017 was 40.4% on TV, 33.3% on digital, 9% on newspapers, 6.9% on magazines, 5.8% on outdoor and 4.3% on radio. Internationally, the largest ("Big Five") advertising agency groups are Omnicom, WPP, Publicis, Interpublic, and Dentsu. In Latin, advertere means "to turn towards". History Egyptians used papyrus to make sales messages and wall posters. Commercial messages and political campaign displays have been found in the ruins of Pompeii and ancient Arabia. Lost and found advertising on papyrus was common in ancient Greece and ancient Rome. Wall or rock painting for commercial advertising is another manifestation of an ancient advertising form, which is present to this day in many parts of Asia, Africa, and South America. The tradition of wall painting can be traced back to Indian rock art paintings that date back to 4000 BC. In ancient China, the earliest advertising known was oral, as recorded in the Classic of Poetry (11th to 7th centuries BC) of bamboo flutes played to sell confectionery. Advertisement usually takes the form of calligraphic signboards and inked papers. A copper printing plate dated back to the Song dynasty used to print posters in the form of a square sheet of paper with a rabbit logo with "Jinan Liu's Fine Needle Shop" and "We buy high-quality steel rods and make fine-quality needles, to be ready for use at home in no time" written above and below is considered the world's earliest identified printed advertising medium. 
In Europe, as the towns and cities of the Middle Ages began to grow, and the general population was unable to read, instead of signs that read "cobbler", "miller", "tailor", or "blacksmith", images associated with their trade would be used such as a boot, a suit, a hat, a clock, a diamond, a horseshoe, a candle or even a bag of flour. Fruits and vegetables were sold in the city square from the backs of carts and wagons and their proprietors used street callers (town criers) to announce their whereabouts. The first compilation of such advertisements was gathered in "Les Crieries de Paris", a thirteenth-century poem by Guillaume de la Villeneuve. 18th-19th century: Newspaper Advertising In the 18th century, advertisements started to appear in weekly newspapers in England. These early print advertisements were used mainly to promote books and newspapers, which became increasingly affordable with advances in the printing press; and medicines, which were increasingly sought after. However, false advertising and so-called "quack" advertisements became a problem, which ushered in the regulation of advertising content. In the United States, newspapers grew quickly in the first few decades of the 19th century, in part due to advertising. By 1822, the United States had more newspaper readers than any other country. About half of the content of these newspapers consisted of advertising, usually local advertising, with half of the daily newspapers in the 1810s using the word "advertiser" in their name. In August 1859, British pharmaceutical firm Beechams created a slogan for Beecham's Pills: "Beechams Pills: Worth a guinea a box", which is considered to be the world's first advertising slogan. The Beechams adverts would appear in newspapers all over the world, helping the company become a global brand. The phrase was said to be uttered by a satisfied lady purchaser from St Helens, Lancashire, the founder's hometown. In June 1836, the French newspaper La Presse was the first to include paid advertising in its pages, allowing it to lower its price, extend its readership and increase its profitability and the formula was soon copied by all titles. Around 1840, Volney B. Palmer established the roots of the modern day advertising agency in Philadelphia. In 1842 Palmer bought large amounts of space in various newspapers at a discounted rate then resold the space at higher rates to advertisers. The actual ad – the copy, layout, and artwork – was still prepared by the company wishing to advertise; in effect, Palmer was a space broker. The situation changed when the first full-service advertising agency of N.W. Ayer & Son was founded in 1869 in Philadelphia. Ayer & Son offered to plan, create, and execute complete advertising campaigns for its customers. By 1900 the advertising agency had become the focal point of creative planning, and advertising was firmly established as a profession. Around the same time, in France, Charles-Louis Havas extended the services of his news agency, Havas to include advertisement brokerage, making it the first French group to organize. At first, agencies were brokers for advertisement space in newspapers. Late 19th century: Modern Advertising The late 19th and early 20th centuries saw the rise of modern advertising, driven by industrialization and the growth of consumer goods. This era saw the dawn of ad agencies, employing more cunning methods— persuasive diction and psychological tactics. Thomas J. Barratt of London has been called "the father of modern advertising". 
Working for the Pears soap company, Barratt created an effective advertising campaign for the company's products, which involved the use of targeted slogans, images and phrases. One of his slogans, "Good morning. Have you used Pears' soap?" was famous in its day and into the 20th century. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster girl for Pears, making her the first celebrity to endorse a commercial product. After becoming the company's brand manager in 1865 – a position listed as the first of its kind by the Guinness Book of Records – Barratt introduced many of the crucial ideas that lie behind successful advertising, and these were widely circulated in his day. He constantly stressed the importance of a strong and exclusive brand image for Pears and of emphasizing the product's availability through saturation campaigns. He also understood the importance of constantly reevaluating the market for changing tastes and mores, stating in 1907 that "tastes change, fashions change, and the advertiser has to change with them. An idea that was effective a generation ago would fall flat, stale, and unprofitable if presented to the public today. Not that the idea of today is always better than the older idea, but it is different – it hits the present taste." Enhanced advertising revenues were one effect of the Industrial Revolution in Britain. Thanks to the revolution and the consumers it created, by the mid-19th century biscuits and chocolate became products for the masses, and British biscuit manufacturers were among the first to introduce branding to distinguish grocery products. One of the world's first global brands, Huntley & Palmers biscuits were sold in 172 countries in 1900, and their global reach was reflected in their advertisements. 20th century As a result of massive industrialization, advertising increased dramatically in the United States. In 1919 it was 2.5 percent of gross domestic product (GDP) in the US, and it averaged 2.2 percent of GDP between then and at least 2007, though it may have declined dramatically since the Great Recession. Industry could not benefit from its increased productivity without a substantial increase in consumer spending. This contributed to the development of mass marketing designed to influence the population's economic behavior on a larger scale. In the 1910s and 1920s, advertisers in the U.S. adopted the doctrine that human instincts could be targeted and harnessed – "sublimated" into the desire to purchase commodities. Edward Bernays, a nephew of Sigmund Freud, became associated with the method and is sometimes called the founder of modern advertising and public relations. In Bernays's view, selling products by appealing to the rational minds of customers (the main method used prior to Bernays) was much less effective than selling products based on the unconscious desires that he felt were the true motivators of human action. "Sex sells" became a controversial issue, with techniques for titillating and enlarging the audience posing a challenge to conventional morality. In the 1920s, under Secretary of Commerce Herbert Hoover, the American government promoted advertising. Hoover himself delivered an address to the Associated Advertising Clubs of the World in 1925 called "Advertising Is a Vital Force in Our National Life". In October 1929, the head of the U.S. Bureau of Foreign and Domestic Commerce, Julius Klein, stated "Advertising is the key to world prosperity." 
This was part of the "unparalleled" collaboration between business and government in the 1920s, according to a 1933 European economic journal. The tobacco companies became major advertisers in order to sell packaged cigarettes. The tobacco companies pioneered the new advertising techniques when they hired Bernays to create positive associations with tobacco smoking. Advertising was also used as a vehicle for cultural assimilation, encouraging workers to exchange their traditional habits and community structure in favor of a shared "modern" lifestyle. An important tool for influencing immigrant workers was the American Association of Foreign Language Newspapers (AAFLN). The AAFLN was primarily an advertising agency but also gained heavily centralized control over much of the immigrant press. At the turn of the 20th century, advertising was one of the few career choices for women. Since women were responsible for most household purchasing, advertisers and agencies recognized the value of women's insight during the creative process. In fact, the first American advertising to use a sexual sell was created by a woman – for a soap product. Although tame by today's standards, the advertisement featured a couple with the message "A skin you love to touch". In the 1920s, psychologists Walter D. Scott and John B. Watson contributed applied psychological theory to the field of advertising. Scott said, "Man has been called the reasoning animal but he could with greater truthfulness be called the creature of suggestion. He is reasonable, but he is to a greater extent suggestible". He demonstrated this through his advertising technique of a direct command to the consumer. Radio from the 1920s In the early 1920s, the first radio stations were established by radio equipment manufacturers, followed by non-profit organizations such as schools, clubs and civic groups who also set up their own stations. Retailers and consumer goods manufacturers quickly recognized radio's potential to reach consumers in their homes and soon adopted advertising techniques that would allow their messages to stand out; slogans, mascots, and jingles began to appear on radio in the 1920s and early television in the 1930s. The rise of mass media communications allowed manufacturers of branded goods to bypass retailers by advertising directly to consumers. This was a major paradigm shift which forced manufacturers to focus on the brand and stimulated the need for superior insights into consumer purchasing, consumption and usage behavior; their needs, wants and aspirations. The earliest radio drama series were sponsored by soap manufacturers and the genre became known as a soap opera. Before long, radio station owners realized they could increase advertising revenue by selling 'air-time' in small time allocations which could be sold to multiple businesses. By the 1930s, these advertising spots, as the packets of time became known, were being sold by the station's geographical sales representatives, ushering in an era of national radio advertising. By the 1940s, manufacturers began to recognize the way in which consumers were developing personal relationships with their brands in a social/psychological/anthropological sense. Advertisers began to use motivational research and consumer research to gather insights into consumer purchasing. Strong branded campaigns for Chrysler and Exxon/Esso, using insights drawn from research methods in psychology and cultural anthropology, led to some of the most enduring campaigns of the 20th century. 
Commercial television in the 1950s In the early 1950s, the DuMont Television Network began the modern practice of selling advertisement time to multiple sponsors. Previously, DuMont had trouble finding sponsors for many of their programs and compensated by selling smaller blocks of advertising time to several businesses. This eventually became the standard for the commercial television industry in the United States. However, it was still a common practice to have single sponsor shows, such as The United States Steel Hour. In some instances the sponsors exercised great control over the content of the show – up to and including having one's advertising agency actually writing the show. The single sponsor model is much less prevalent now, a notable exception being the Hallmark Hall of Fame. Cable television from the 1980s The late 1980s and early 1990s saw the introduction of cable television and particularly MTV. Pioneering the concept of the music video, MTV ushered in a new type of advertising: the consumer tunes in for the advertising message, rather than it being a by-product or afterthought. As cable and satellite television became increasingly prevalent, specialty channels emerged, including channels entirely devoted to advertising, such as QVC, Home Shopping Network, and ShopTV Canada. Internet from the 1990s With the advent of the ad server, online advertising grew, contributing to the "dot-com" boom of the 1990s. Entire corporations operated solely on advertising revenue, offering everything from coupons to free Internet access. At the turn of the 21st century, some websites, including the search engine Google, changed online advertising by personalizing ads based on web browsing behavior. This has led to other similar efforts and an increase in interactive advertising. Online advertising introduced new opportunities for targeting and engagement, with platforms like Google and Facebook leading the charge. This shift has significantly altered the advertising landscape, making digital advertising a dominant force in the industry. The share of advertising spending relative to GDP has changed little across large changes in media since 1925. In 1925, the main advertising media in America were newspapers, magazines, signs on streetcars, and outdoor posters. Advertising spending as a share of GDP was about 2.9 percent. By 1998, television and radio had become major advertising media; by 2017, the balance between broadcast and online advertising had shifted, with online spending exceeding broadcast. Nonetheless, advertising spending as a share of GDP was slightly lower – about 2.4 percent. Guerrilla marketing involves unusual approaches such as staged encounters in public places, giveaways of products such as cars that are covered with brand messages, and interactive advertising where the viewer can respond to become part of the advertising message. This type of advertising is unpredictable, which causes consumers to buy the product or idea. This reflects an increasing trend of interactive and "embedded" ads, such as via product placement, having consumers vote through text messages, and various campaigns utilizing social network services such as Facebook or Twitter. The advertising business model has also been adapted in recent years. In media for equity, advertising is not sold, but provided to start-up companies in return for equity. If the company grows and is sold, the media companies receive cash for their shares. 
Domain name registrants (usually those who register and renew domains as an investment) sometimes "park" their domains and allow advertising companies to place ads on their sites in return for per-click payments. These ads are typically driven by pay per click search engines like Google or Yahoo, but ads can sometimes be placed directly on targeted domain names through a domain lease or by making contact with the registrant of a domain name that describes a product. Domain name registrants are generally easy to identify through WHOIS records that are publicly available at registrar websites. Classification Advertising may be categorized in a variety of ways, including by style, target audience, geographic scope, medium, or purpose. For example, in print advertising, classification by style can include display advertising (ads with design elements sold by size) vs. classified advertising (ads without design elements sold by the word or line). Advertising may be local, national or global. An ad campaign may be directed toward consumers or to businesses. The purpose of an ad may be to raise awareness (brand advertising), or to elicit an immediate sale (direct response advertising). The term above the line (ATL) is used for advertising involving mass media; more targeted forms of advertising and promotion are referred to as below the line (BTL). The two terms date back to 1954 when Procter & Gamble began paying their advertising agencies differently from other promotional agencies. In the 2010s, as advertising technology developed, a new term, through the line (TTL) began to come into use, referring to integrated advertising campaigns. Traditional media Virtually any medium can be used for advertising. Commercial advertising media can include wall paintings, billboards, street furniture components, printed flyers and rack cards, radio, cinema and television adverts, web banners, mobile telephone screens, shopping carts, web popups, skywriting, bus stop benches, human billboards and forehead advertising, magazines, newspapers, town criers, sides of buses, banners attached to or sides of airplanes ("logojets"), in-flight advertisements on seatback tray tables or overhead storage bins, taxicab doors, roof mounts and passenger screens, musical stage shows, subway platforms and trains, elastic bands on disposable diapers, doors of bathroom stalls, stickers on apples in supermarkets, shopping cart handles (grabertising), the opening section of streaming audio and video, posters, and the backs of event tickets and supermarket receipts. Any situation in which an "identified" sponsor pays to deliver their message through a medium is advertising. Television Television advertising is one of the most expensive types of advertising; networks charge large amounts for commercial airtime during popular events. The annual Super Bowl football game in the United States is known as the most prominent advertising event on television – with an audience of over 108 million and studies showing that 50% of those only tuned in to see the advertisements. During the 2014 edition of this game, the average thirty-second ad cost US$4 million, and $8 million was charged for a 60-second spot. Virtual advertisements may be inserted into regular programming through computer graphics. It is typically inserted into otherwise blank backdrops or used to replace local billboards that are not relevant to the remote broadcast audience. Virtual billboards may be inserted into the background where none exist in real-life. 
This technique is especially used in televised sporting events. Virtual product placement is also possible. An infomercial is a long-format television commercial, typically five minutes or longer. The name blends the words "information" and "commercial". The main objective in an infomercial is to create an impulse purchase, so that the target sees the presentation and then immediately buys the product through the advertised toll-free telephone number or website. Infomercials describe and often demonstrate products, and commonly have testimonials from customers and industry professionals. Radio Radio advertisements are broadcast as radio waves to the air from a transmitter to an antenna and thus to a receiving device. Airtime is purchased from a station or network in exchange for airing the commercials. While radio has the limitation of being restricted to sound, proponents of radio advertising often cite this as an advantage. Radio is an expanding medium that can be found on air, and also online. According to Arbitron, radio has approximately 241.6 million weekly listeners, or more than 93 percent of the U.S. population. Online Online advertising is a form of promotion that uses the Internet and World Wide Web for the express purpose of delivering marketing messages to attract customers. Online ads are delivered by an ad server. Examples of online advertising include contextual ads that appear on search engine results pages, banner ads, pay per click text ads, rich media ads, social network advertising, online classified advertising, advertising networks and e-mail marketing, including e-mail spam. A newer form of online advertising is native ads; they go in a website's news feed and are supposed to improve user experience by being less intrusive. However, some people argue this practice is deceptive. Domain names Domain name advertising is most commonly done through pay per click web search engines; however, advertisers often lease space directly on domain names that generically describe their products. When an Internet user visits a website by typing a domain name directly into their web browser, this is known as "direct navigation", or "type in" web traffic. Although many Internet users search for ideas and products using search engines and mobile phones, a large number of users around the world still use the address bar. They will type a keyword into the address bar such as "geraniums" and add ".com" to the end of it. Sometimes they will do the same with ".org" or a country-code top-level domain (TLD) such as ".co.uk" for the United Kingdom or ".ca" for Canada. When Internet users type in a generic keyword and add .com or another top-level domain (TLD) ending, it produces a targeted sales lead. Domain name advertising was originally developed by Oingo (later known as Applied Semantics), one of Google's early acquisitions. Product placements Product placement is when a product or brand is embedded in entertainment and media. For example, in a film, the main character can use an item of a definite brand, as in the movie Minority Report, where Tom Cruise's character John Anderton owns a phone with the Nokia logo clearly written in the top corner, or his watch engraved with the Bulgari logo. Another example of advertising in film is in I, Robot, where the main character, played by Will Smith, mentions his Converse shoes several times, calling them "classics", because the film is set far in the future. 
I, Robot and Spaceballs also showcase futuristic cars with the Audi and Mercedes-Benz logos clearly displayed on the front of the vehicles. Cadillac chose to advertise in the movie The Matrix Reloaded, which as a result contained many scenes in which Cadillac cars were used. Similarly, product placement for Omega watches, Ford, VAIO, BMW and Aston Martin cars is featured in recent James Bond films, most notably Casino Royale. In Fantastic Four: Rise of the Silver Surfer, the main transport vehicle shows a large Dodge logo on the front. Blade Runner includes some of the most obvious product placement; the whole film stops to show a Coca-Cola billboard. Print Print advertising describes advertising in a printed medium such as a newspaper, magazine, or trade journal. This encompasses everything from media with a very broad readership base, such as a major national newspaper or magazine, to more narrowly targeted media such as local newspapers and trade journals on very specialized topics. One form of print advertising is classified advertising, which allows private individuals or companies to purchase a small, narrowly targeted ad paid by the word or line. Another form of print advertising is the display ad, which is generally a larger ad with design elements that typically run in an article section of a newspaper. Outdoor Billboards, also known as hoardings in some parts of the world, are large structures located in public places which display advertisements to passing pedestrians and motorists. Most often, they are located on main roads with a large amount of passing motor and pedestrian traffic; however, they can be placed in any location with large numbers of viewers, such as on mass transit vehicles and in stations, in shopping malls or office buildings, and in stadiums. The form known as street advertising first came to prominence in the UK through Street Advertising Services, which created outdoor advertising on street furniture and pavements, working with products such as Reverse Graffiti, air dancers and 3D pavement advertising to get brand messages out into public spaces. Sheltered outdoor advertising combines outdoor with indoor advertisement by placing large mobile structures (tents) in public places on a temporary basis. The large outer advertising space aims to exert a strong pull on the observer, while the product is promoted indoors, where the creative decor can intensify the impression. Mobile billboards are generally vehicle-mounted billboards or digital screens. These can be on dedicated vehicles built solely for carrying advertisements along routes preselected by clients; they can also be specially equipped cargo trucks or, in some cases, large banners strewn from planes. The billboards are often lighted, some being backlit and others employing spotlights. Some billboard displays are static, while others change; for example, continuously or periodically rotating among a set of advertisements. Mobile displays are used for various situations in metropolitan areas throughout the world, including target advertising, one-day and long-term campaigns, conventions, sporting events, store openings and similar promotional events, and big advertisements from smaller companies. Point-of-sale In-store advertising is any advertisement placed in a retail store. It includes placement of a product in visible locations in a store, such as at eye level, at the ends of aisles and near checkout counters (a.k.a. 
POP – point of purchase display), eye-catching displays promoting a specific product, and advertisements in such places as shopping carts and in-store video displays. Novelties Advertising printed on small tangible items such as coffee mugs, T-shirts, pens, bags, and such is known as novelty advertising. Some printers specialize in printing novelty items, which can then be distributed directly by the advertiser, or items may be distributed as part of a cross-promotion, such as ads on fast food containers. Celebrity endorsements Advertising in which a celebrity endorses a product or brand leverages celebrity power, fame, money, and popularity to gain recognition for their products or to promote specific stores' or products. Advertisers often advertise their products, for example, when celebrities share their favorite products or wear clothes by specific brands or designers. Celebrities are often involved in advertising campaigns such as television or print adverts to advertise specific or general products. The use of celebrities to endorse a brand can have its downsides, however; one mistake by a celebrity can be detrimental to the public relations of a brand. For example, following his performance of eight gold medals at the 2008 Olympic Games in Beijing, China, swimmer Michael Phelps' contract with Kellogg's was terminated, as Kellogg's did not want to associate with him after he was photographed smoking marijuana. Celebrities such as Britney Spears have advertised for multiple products including Pepsi, Candies from Kohl's, Twister, NASCAR, and Toyota. Aerial Using aircraft, balloons or airships to create or display advertising media. Skywriting is a notable example. New media approaches A new advertising approach is known as advanced advertising, which is data-driven advertising, using large quantities of data, precise measuring tools and precise targeting. Advanced advertising also makes it easier for companies which sell ad space to attribute customer purchases to the ads they display or broadcast. Increasingly, other media are overtaking many of the "traditional" media such as television, radio and newspaper because of a shift toward the usage of the Internet for news and music as well as devices like digital video recorders (DVRs) such as TiVo. Online advertising began with unsolicited bulk e-mail advertising known as "e-mail spam". Spam has been a problem for e-mail users since 1978. As new online communication channels became available, advertising followed. The first banner ad appeared on the World Wide Web in 1994. Prices of Web-based advertising space are dependent on the "relevance" of the surrounding web content and the traffic that the website receives. In online display advertising, display ads generate awareness quickly. Unlike search, which requires someone to be aware of a need, display advertising can drive awareness of something new and without previous knowledge. Display works well for direct response. The display is not only used for generating awareness, it is used for direct response campaigns that link to a landing page with a clear 'call to action'. As the mobile phone became a new mass medium in 1998 when the first paid downloadable content appeared on mobile phones in Finland, mobile advertising followed, also first launched in Finland in 2000. By 2007 the value of mobile advertising had reached $2 billion and providers such as Admob delivered billions of mobile ads. 
More advanced mobile ads include banner ads, coupons, Multimedia Messaging Service picture and video messages, advergames and various engagement marketing campaigns. A particular feature driving mobile ads is the 2D barcode, which replaces the need to do any typing of web addresses, and uses the camera feature of modern phones to gain immediate access to web content. 83 percent of Japanese mobile phone users already are active users of 2D barcodes. Some companies have proposed placing messages or corporate logos on the side of booster rockets and the International Space Station. Unpaid advertising (also called "publicity advertising"), can include personal recommendations ("bring a friend", "sell it"), spreading buzz, or achieving the feat of equating a brand with a common noun (in the United States, "Xerox" = "photocopier", "Kleenex" = tissue, "Vaseline" = petroleum jelly, "Hoover" = vacuum cleaner, and "Band-Aid" = adhesive bandage). However, some companies oppose the use of their brand name to label an object. Equating a brand with a common noun also risks turning that brand into a generic trademark – turning it into a generic term which means that its legal protection as a trademark is lost. Early in its life, The CW aired short programming breaks called "Content Wraps", to advertise one company's product during an entire commercial break. The CW pioneered "content wraps" and some products featured were Herbal Essences, Crest, Guitar Hero II, CoverGirl, and Toyota. A new promotion concept has appeared, "ARvertising", advertising on augmented reality technology. Controversy exists on the effectiveness of subliminal advertising (see mind control), and the pervasiveness of mass messages (propaganda). Rise in new media With the Internet came many new advertising opportunities. Pop-up, Flash, banner, pop-under, advergaming, and email advertisements (all of which are often unwanted or spam in the case of email) are now commonplace. Particularly since the rise of "entertaining" advertising, some people may like an advertisement enough to wish to watch it later or show a friend. In general, the advertising community has not yet made this easy, although some have used the Internet to widely distribute their ads to anyone willing to see or hear them. In the last three quarters of 2009, mobile and Internet advertising grew by 18% and 9% respectively, while older media advertising saw declines: −10.1% (TV), −11.7% (radio), −14.8% (magazines) and −18.7% (newspapers). Between 2008 and 2014, U.S. newspapers lost more than half their print advertising revenue. Niche marketing Another significant trend regarding future of advertising is the growing importance of the niche market using niche or targeted ads. Also brought about by the Internet and the theory of the long tail, advertisers will have an increasing ability to reach specific audiences. In the past, the most efficient way to deliver a message was to blanket the largest mass market audience possible. However, usage tracking, customer profiles and the growing popularity of niche content brought about by everything from blogs to social networking sites, provide advertisers with audiences that are smaller but much better defined, leading to ads that are more relevant to viewers and more effective for companies' marketing products. Among others, Comcast Spotlight is one such advertiser employing this method in their video on demand menus. 
These advertisements are targeted to a specific group and can be viewed by anyone wishing to find out more about a particular business or practice, from their home. This causes the viewer to become proactive and actually choose what advertisements they want to view. Niche marketing could also be helped by bringing the issue of color into advertisements. Different colors play major roles when it comes to marketing strategies; for example, seeing the color blue can promote a sense of calmness and security, which is why many social networks such as Facebook use blue in their logos. Google AdSense is an example of niche marketing. Google calculates the primary purpose of a website and adjusts ads accordingly; it uses keywords on the page (or even in emails) to find the general ideas of topics discussed and places ads that will most likely be clicked on by viewers of the email account or website visitors. Crowdsourcing The concept of crowdsourcing has led to the trend of user-generated advertisements. User-generated ads are created by people, as opposed to an advertising agency or the company itself, often resulting from brand-sponsored advertising competitions. For the 2007 Super Bowl, the Frito-Lay division of PepsiCo held the "Crash the Super Bowl" contest, allowing people to create their own Doritos commercials. Chevrolet held a similar competition for their Tahoe line of SUVs. Due to the success of the Doritos user-generated ads in the 2007 Super Bowl, Frito-Lay relaunched the competition for the 2009 and 2010 Super Bowls. The resulting ads were among the most-watched and most-liked Super Bowl ads. In fact, the winning ad that aired in the 2009 Super Bowl was ranked by the USA Today Super Bowl Ad Meter as the top ad for the year, while the winning ads that aired in the 2010 Super Bowl were found by Nielsen's BuzzMetrics to be the "most buzzed-about". Another example of companies using crowdsourcing successfully is the beverage company Jones Soda, which encourages consumers to participate in the label design themselves. This trend has given rise to several online platforms that host user-generated advertising competitions on behalf of a company. Founded in 2007, Zooppa has launched ad competitions for brands such as Google, Nike, Hershey's, General Mills, Microsoft, NBC Universal, Zinio, and Mini Cooper. Crowdsourcing remains controversial, as the long-term impact on the advertising industry is still unclear. Globalization Advertising has gone through five major stages of development: domestic, export, international, multi-national, and global. For global advertisers, there are four, potentially competing, business objectives that must be balanced when developing worldwide advertising: building a brand while speaking with one voice, developing economies of scale in the creative process, maximizing local effectiveness of ads, and increasing the company's speed of implementation. Born from the evolutionary stages of global marketing are the three primary and fundamentally different approaches to the development of global advertising executions: exporting executions, producing local executions, and importing ideas that travel. Advertising research is key to determining the success of an ad in any country or region. The ability to identify which elements and/or moments of an ad contribute to its success is how economies of scale are maximized. Once one knows what works in an ad, that idea or ideas can be imported by any other market.
Market research measures, such as Flow of Attention, Flow of Emotion and branding moments, provide insight into what is working in an ad in any country or region because the measures are based on the visual, not verbal, elements of the ad. Foreign public messaging Foreign governments, particularly those that own marketable commercial products or services, often promote their interests and positions through the advertising of those goods because the target audience is not only largely unaware of the forum as a vehicle for foreign messaging but also willing to receive the message while in a mental state of absorbing information from advertisements during television commercial breaks, while reading a periodical, or while passing by billboards in public spaces. A prime example of this messaging technique is advertising campaigns that promote international travel. While advertising foreign destinations and services may stem from the typical goal of increasing revenue by drawing more tourism, some travel campaigns carry the additional or alternative intended purpose of promoting good sentiments or improving existing ones among the target audience towards a given nation or region. It is common for advertising promoting foreign countries to be produced and distributed by the tourism ministries of those countries, so these ads often carry political statements and/or depictions of the foreign government's desired international public perception. Additionally, a wide range of foreign airlines and travel-related services which advertise separately from the destinations themselves are owned by their respective governments; examples include, though are not limited to, the Emirates airline (Dubai), Singapore Airlines (Singapore), Qatar Airways (Qatar), China Airlines (Taiwan/Republic of China), and Air China (People's Republic of China). By depicting their destinations, airlines, and other services in a favorable and pleasant light, countries market themselves to populations abroad in a manner that could mitigate prior public impressions. Diversification In the realm of advertising agencies, continued industry diversification has seen observers note that "big global clients don't need big global agencies any more". This is reflected by the growth of non-traditional agencies in various global markets, such as the Canadian business TAXI and SMART in Australia, and has been referred to as "a revolution in the ad world". New technology The ability to record shows on digital video recorders (such as TiVo) allows watchers to record programs for later viewing, enabling them to fast-forward through commercials. Additionally, as more seasons of television programs are offered for sale as pre-recorded box sets, fewer people watch the shows on TV. However, because these sets are sold, the company still receives additional profits from them. To counter this effect, a variety of strategies have been employed. Many advertisers have opted for product placement on TV shows like Survivor. Other strategies include integrating advertising with internet-connected program guides (EPGs), advertising on companion devices (like smartphones and tablets) during the show, and creating mobile apps for TV programs. Additionally, some brands have opted for social television sponsorship. The emerging technology of drone displays has recently been used for advertising purposes.
Education In recent years there have been several media literacy initiatives, more specifically concerning advertising, that seek to empower citizens in the face of media advertising campaigns. Advertising education has become popular, with bachelor's, master's and doctorate degrees becoming available with this emphasis. A surge in advertising interest is typically attributed to the strong role advertising plays in cultural and technological changes, such as the advance of online social networking. A unique model for teaching advertising is the student-run advertising agency, where advertising students create campaigns for real companies. Organizations such as the American Advertising Federation establish companies with students to create these campaigns. Purposes Advertising is at the forefront of delivering the proper message to customers and prospective customers. The purpose of advertising is to inform consumers about a product and convince customers that a company's services or products are the best, to enhance the image of the company, to point out and create a need for products or services, to demonstrate new uses for established products, to announce new products and programs, to reinforce salespeople's individual messages, to draw customers to the business, and to hold existing customers. Sales promotions and brand loyalty Sales promotions are another way to advertise. Sales promotions serve a double purpose because they are used to gather information about what type of customers one draws in and where they are, and to jump-start sales. Sales promotions include things like contests and games, sweepstakes, product giveaways, samples, coupons, loyalty programs, and discounts. The ultimate goal of sales promotions is to stimulate potential customers to action. Criticisms While advertising can be seen as necessary for economic growth, it is not without social costs. Unsolicited commercial e-mail and other forms of spam have become so prevalent as to be a major nuisance to users of these services, as well as being a financial burden on internet service providers. Advertising is increasingly invading public spaces, such as schools, which some critics argue is a form of child exploitation. This increasing difficulty in limiting exposure to specific audiences can result in negative backlash for advertisers. In tandem with these criticisms, the advertising industry has seen low approval rates in surveys and negative cultural portrayals. A 2021 study found that for more than 80% of brands, advertising had a negative return on investment. Unsolicited ads have been criticized as attention theft. One of the most controversial criticisms of advertisement in the present day is the predominance of advertising of foods high in sugar, fat, and salt specifically to children. Critics claim that food advertisements targeting children are exploitative and are not sufficiently balanced with proper nutritional education to help children understand the consequences of their food choices. Additionally, children may not understand that they are being sold something, and are therefore more impressionable. Michelle Obama has criticized large food companies for advertising unhealthy foods largely towards children and has requested that food companies either limit their advertising to children or advertise foods that are more in line with dietary guidelines.
Other criticisms concern the changes that advertisements bring about in society, as well as deceptive ads aired and published by corporations. The cosmetic and health industries are among the worst offenders in this respect and have created particular cause for concern. Political advertisements and their regulation have been scrutinized for misinformation, ethics and political bias. Regulation There have been increasing efforts to protect the public interest by regulating the content and the influence of advertising. Some examples include restrictions for advertising alcohol, tobacco or gambling imposed in many countries, as well as the bans around advertising to children, which exist in parts of Europe. Advertising regulation focuses heavily on the veracity of the claims and, as such, there are often tighter restrictions placed around advertisements for food and healthcare products. The advertising industries within some countries rely less on laws and more on systems of self-regulation. Advertisers and the media agree on a code of advertising standards that they attempt to uphold. The general aim of such codes is to ensure that any advertising is 'legal, decent, honest and truthful'. Some self-regulatory organizations are funded by the industry, but remain independent, with the intent of upholding the standards or codes, like the Advertising Standards Authority in the UK. In the UK, most forms of outdoor advertising, such as the display of billboards, are regulated by the UK Town and Country Planning system. The display of an advertisement without consent from the Planning Authority is a criminal offense liable to a fine of £2,500 per offense. In the US, where some communities believe that outdoor advertisements are a blight on landscapes, attempts to ban billboard advertising in the open countryside occurred in the 1960s, leading to the Highway Beautification Act. Cities such as São Paulo have introduced an outright ban, with London also having specific legislation to control unlawful displays. Some governments restrict the languages that can be used in advertisements, but advertisers may employ tricks to try to avoid such restrictions. In France for instance, advertisers sometimes print English words in bold and French translations in fine print to deal with Article 120 of the 1994 Toubon Law limiting the use of English. The advertising of pricing information is another topic of concern for governments. In the United States for instance, it is common for businesses to only mention the existence and amount of applicable taxes at a later stage of a transaction. In Canada and New Zealand, taxes can be listed as separate items, as long as they are quoted up-front. In most other countries, the advertised price must include all applicable taxes, enabling customers to easily know how much it will cost them. Theory Hierarchy-of-effects models Various competing models of hierarchies of effects attempt to provide a theoretical underpinning to advertising practice. The model of Clow and Baack clarifies the objectives of an advertising campaign and of each individual advertisement. The model postulates six steps a buyer moves through when making a purchase: Awareness, Knowledge, Liking, Preference, Conviction, and Purchase. Means-end theory suggests that an advertisement should contain a message or means that leads the consumer to a desired end-state. Leverage points aim to move the consumer from understanding a product's benefits to linking those benefits with personal values.
Marketing mix The marketing mix was proposed by professor E. Jerome McCarthy in the 1960s. It consists of four basic elements called the "four Ps". Product is the first P representing the actual product. Price represents the process of determining the value of a product. Place represents the variables of getting the product to the consumer such as distribution channels, market coverage and movement organization. The last P stands for Promotion which is the process of reaching the target market and convincing them to buy the product. In the 1990s, the concept of four Cs was introduced as a more customer-driven replacement of four P's. There are two theories based on four Cs: Lauterborn's four Cs (consumer, cost, communication, convenience) and Shimizu's four Cs (commodity, cost, communication, channel) in the 7Cs Compass Model (Co-marketing). Communications can include advertising, sales promotion, public relations, publicity, personal selling, corporate identity, internal communication, SNS, and MIS. Research Advertising research is a specialized form of research that works to improve the effectiveness and efficiency of advertising. It entails numerous forms of research which employ different methodologies. Advertising research includes pre-testing (also known as copy testing) and post-testing of ads and/or campaigns. Pre-testing includes a wide range of qualitative and quantitative techniques, including: focus groups, in-depth target audience interviews (one-on-one interviews), small-scale quantitative studies and physiological measurement. The goal of these investigations is to better understand how different groups respond to various messages and visual prompts, thereby providing an assessment of how well the advertisement meets its communications goals. Post-testing employs many of the same techniques as pre-testing, usually with a focus on understanding the change in awareness or attitude attributable to the advertisement. With the emergence of digital advertising technologies, many firms have begun to continuously post-test ads using real-time data. This may take the form of A/B split-testing or multivariate testing. Continuous ad tracking and the Communicus System are competing examples of post-testing advertising research types. Semiotics Meanings between consumers and marketers depict signs and symbols that are encoded in everyday objects. Semiotics is the study of signs and how they are interpreted. Advertising has many hidden signs and meanings within brand names, logos, package designs, print advertisements, and television advertisements. Semiotics aims to study and interpret the message being conveyed in (for example) advertisements. Logos and advertisements can be interpreted at two levels – known as the surface level and the underlying level. The surface level uses signs creatively to create an image or personality for a product. These signs can be images, words, fonts, colors, or slogans. The underlying level is made up of hidden meanings. The combination of images, words, colors, and slogans must be interpreted by the audience or consumer. The "key to advertising analysis" is the signifier and the signified. The signifier is the object and the signified is the mental concept. A product has a signifier and a signified. The signifier is the color, brand name, logo design, and technology. The signified has two meanings known as denotative and connotative. The denotative meaning is the meaning of the product. 
A television's denotative meaning might be that it is high definition. The connotative meaning is the product's deep and hidden meaning. A connotative meaning of a television would be that it is top-of-the-line. Apple's commercials used a black silhouette of a person that was the age of Apple's target market. They placed the silhouette in front of a blue screen so that the picture behind the silhouette could be constantly changing. However, the one thing that stays the same in these ads is that there is music in the background and the silhouette is listening to that music on a white iPod through white headphones. Through advertising, the white color on a set of earphones now signifies that the music device is an iPod. The white color signifies almost all of Apple's products. The semiotics of gender plays a key influence on the way in which signs are interpreted. When considering gender roles in advertising, individuals are influenced by three categories. Certain characteristics of stimuli may enhance or decrease the elaboration of the message (if the product is perceived as feminine or masculine). Second, the characteristics of individuals can affect attention and elaboration of the message (traditional or non-traditional gender role orientation). Lastly, situational factors may be important to influence the elaboration of the message. There are two types of marketing communication claims-objective and subjective. Objective claims stem from the extent to which the claim associates the brand with a tangible product or service feature. For instance, a camera may have auto-focus features. Subjective claims convey emotional, subjective, impressions of intangible aspects of a product or service. They are non-physical features of a product or service that cannot be directly perceived, as they have no physical reality. For instance the brochure has a beautiful design. Males tend to respond better to objective marketing-communications claims while females tend to respond better to subjective marketing communications claims. Voiceovers are commonly used in advertising. Most voiceovers are done by men, with figures of up to 94% having been reported. There have been more female voiceovers in recent years, but mainly for food, household products, and feminine-care products. Gender effects on comprehension According to a 1977 study by David Statt, females process information comprehensively, while males process information through heuristic devices such as procedures, methods or strategies for solving problems, which could have an effect on how they interpret advertising. According to this study, men prefer to have available and apparent cues to interpret the message, whereas females engage in more creative, associative, imagery-laced interpretation. Later research by a Danish team found that advertising attempts to persuade men to improve their appearance or performance, whereas its approach to women aims at transformation toward an impossible ideal of female presentation. In Paul Suggett's article "The Objectification of Women in Advertising" he discusses the negative impact that these women in advertisements, who are too perfect to be real, have on women, as well as men, in real life. Advertising's manipulation of women's aspiration to these ideal types as portrayed in film, in erotic art, in advertising, on stage, within music videos and through other media exposures requires at least a conditioned rejection of female reality and thereby takes on a highly ideological cast. 
Studies show that these expectations of women and young girls negatively affect their views about their bodies and appearances. These advertisements are directed towards men. Not everyone agrees: one critic viewed this monologic, gender-specific interpretation of advertising as excessively skewed and politicized. There are some companies like Dove and aerie that are creating commercials to portray more natural women, with less post production manipulation, so more women and young girls are able to relate to them. More recent research by Martin (2003) reveals that males and females differ in how they react to advertising depending on their mood at the time of exposure to the ads and on the affective tone of the advertising. When feeling sad, males prefer happy ads to boost their mood. In contrast, females prefer happy ads when they are feeling happy. The television programs in which ads are embedded influence a viewer's mood state. Susan Wojcicki, author of the article "Ads that Empower Women don't just Break Stereotypes—They're also Effective" discusses how advertising to women has changed since the first Barbie commercial, where a little girl tells the doll that, she wants to be just like her. Little girls grow up watching advertisements of scantily clad women advertising things from trucks to burgers and Wojcicki states that this shows girls that they are either arm candy or eye candy. Alternatives Other approaches to revenue include donations, paid subscriptions, microtransactions, and data monetization. Websites and applications are "ad-free" when not using advertisements at all for revenue. For example, the online encyclopedia Wikipedia provides free content by receiving funding from charitable donations. "Fathers" of advertising Late 1700s – Benjamin Franklin (1706–1790) – "father of advertising in America" Late 1800s – Thomas J. Barratt (1841–1914) of London – called "the father of modern advertising" by T.F.G. Coates Early 1900s – J. Henry ("Slogan") Smythe Jr of Philadelphia – "world's best known slogan writer" Early 1900s – Albert Lasker (1880–1952) – the "father of modern advertising"; defined advertising as "salesmanship in print, driven by a reason why" Influential thinkers in advertising theory and practice N. W. Ayer & Son – probably the first advertising agency to use mass media (i.e. telegraph) in a promotional campaign Claude C. Hopkins (1866–1932) – popularised the use of test campaigns, especially coupons in direct mail, to track the efficiency of marketing spend Ernest Dichter (1907–1991) – developed the field of motivational research, used extensively in advertising E. St. 
Elmo Lewis (1872–1948) – developed the first hierarchy of effects model (AIDA) used in sales and advertising Arthur Nielsen (1897–1980) – founded one of the earliest international advertising agencies and developed ratings for radio & TV David Ogilvy (1911–1999) – pioneered the positioning concept and advocated the use of brand image in advertising Charles Coolidge Parlin (1872–1942) – regarded as the pioneer of the use of marketing research in advertising Rosser Reeves (1910–1984) – developed the concept of the unique selling proposition (USP) and advocated the use of repetition in advertising Al Ries (1926–2022) – advertising executive and author, credited with coining the term "positioning" in the late 1960s Daniel Starch (1883–1979) – developed the Starch score method of measuring print media effectiveness (still in use) J. Walter Thompson – one of the earliest advertising agencies See also Advertisements in schools Advertorial Annoyance factor Bibliography of advertising Branded content Commercial speech Comparative advertising Conquesting Copywriting Demo mode Direct-to-consumer advertising Family in advertising Graphic design Gross rating point History of Advertising Trust Informative advertising Integrated marketing communications List of advertising awards Local advertising Market overhang Media planning Meta-advertising Mobile marketing Performance-based advertising Promotional mix Senior media creative Shock advertising Viral marketing World Federation of Advertisers References Notes Further reading Arens, William, and Michael Weigold. Contemporary Advertising and Integrated Marketing Communications (2012) Belch, George E., and Michael A. Belch. Advertising and Promotion: An Integrated Marketing Communications Perspective (10th ed. 2014) Biocca, Frank. Television and Political Advertising: Volume I: Psychological Processes (Routledge, 2013) Chandra, Ambarish, and Ulrich Kaiser. "Targeted advertising in magazine markets and the advent of the internet." Management Science 60.7 (2014) pp. 1829–1843. Chen, Yongmin, and Chuan He. "Paid placement: Advertising and search on the internet." The Economic Journal 121#556 (2011): F309–F328. online Johnson-Cartee, Karen S., and Gary Copeland. Negative political advertising: Coming of age (2013) McAllister, Matthew P. and Emily West, eds. The Routledge Companion to Advertising and Promotional Culture (2013) McFall, Elizabeth Rose. Advertising: a cultural economy (2004), cultural and sociological approaches to advertising Moriarty, Sandra, and Nancy Mitchell. Advertising & IMC: Principles and Practice (10th ed. 2014) Okorie, Nelson. The Principles of Advertising: concepts and trends in advertising (2011) Reichert, Tom, and Jacqueline Lambiase, eds. Sex in advertising: Perspectives on the erotic appeal (Routledge, 2014) Sheehan, Kim Bartel. Controversies in contemporary advertising (Sage Publications, 2013) Vestergaard, Torben and Schrøder, Kim. The Language of Advertising. Oxford: Basil Blackwell, 1985. Splendora, Anthony. "Discourse", a Review of Vestergaard and Schrøder, The Language of Advertising in Language in Society Vol. 15, No. 4 (Dec. 1986), pp. 445–449 History Brandt, Allan. The Cigarette Century (2009) Crawford, Robert. But Wait, There's More!: A History of Australian Advertising, 1900–2000 (2008) Ewen, Stuart. Captains of Consciousness: Advertising and the Social Roots of Consumer Culture. New York: McGraw-Hill, 1976. Fox, Stephen R.
The mirror makers: A history of American advertising and its creators (University of Illinois Press, 1984) Friedman, Walter A. Birth of a Salesman (Harvard University Press, 2005), In the United States Jacobson, Lisa. Raising consumers: Children and the American mass market in the early twentieth century (Columbia University Press, 2013) Jamieson, Kathleen Hall. Packaging the presidency: A history and criticism of presidential campaign advertising (Oxford University Press, 1996) Laird, Pamela Walker. Advertising progress: American business and the rise of consumer marketing (Johns Hopkins University Press, 2001.) Lears, Jackson. Fables of abundance: A cultural history of advertising in America (1995) Liguori, Maria Chiara. "North and South: Advertising Prosperity in the Italian Economic Boom Years." Advertising & Society Review (2015) 15#4 Meyers, Cynthia B. A Word from Our Sponsor: Admen, Advertising, and the Golden Age of Radio (2014) Mazzarella, William. Shoveling smoke: Advertising and globalization in contemporary India (Duke University Press, 2003) Moriarty, Sandra, et al. Advertising: Principles and practice (Pearson Australia, 2014), Australian perspectives Nevett, Terence R. Advertising in Britain: a history (1982) Oram, Hugh. The advertising book: The history of advertising in Ireland (MOL Books, 1986) Presbrey, Frank. "The history and development of advertising." Advertising & Society Review (2000) 1#1 online Saunders, Thomas J. "Selling under the Swastika: Advertising and Commercial Culture in Nazi Germany." German History (2014): ghu058. Short, John Phillip. "Advertising Empire: Race and Visual Culture in Imperial Germany." Enterprise and Society (2014): khu013. Sivulka, Juliann. Soap, sex, and cigarettes: A cultural history of American advertising (Cengage Learning, 2011) Spring, Dawn. "The Globalization of American Advertising and Brand Management: A Brief History of the J. Walter Thompson Company, Proctor and Gamble, and US Foreign Policy." Global Studies Journal (2013). 5#4 Stephenson, Harry Edward, and Carlton McNaught. The Story of Advertising in Canada: A Chronicle of Fifty Years (Ryerson Press, 1940) Tungate, Mark. Adland: a global history of advertising (Kogan Page Publishers, 2007.) West, Darrell M. Air Wars: Television Advertising and Social Media in Election Campaigns, 1952–2012 (Sage, 2013) External links Hartman Center for Sales, Advertising & Marketing History at Duke University Duke University Libraries Digital Collections: Ad*Access, over 7,000 U.S. and Canadian advertisements, dated 1911–1955, includes World War II propaganda. Emergence of Advertising in America, 9,000 advertising items and publications dating from 1850 to 1940, illustrating the rise of consumer culture and the birth of a professionalized advertising industry in the United States. AdViews, vintage television commercials ROAD 2.0, 30,000 outdoor advertising images Medicine & Madison Avenue, documents advertising of medical and pharmaceutical products Art & Copy, a 2009 documentary film about the advertising industry Articles containing video clips Communication design Promotion and marketing communications Business models
Advertising
[ "Engineering" ]
12,552
[ "Design", "Communication design" ]
2,862
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm. In the past, problems supposed to be AI-complete included computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. AI-complete problems were notably considered useful for testing the presence of humans, as CAPTCHAs aim to do, and in computer security to circumvent brute-force attacks. History The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File. Expert systems, which were popular in the 1980s, were able to solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail as unexpected circumstances outside of their original problem context began to appear. When human beings are dealing with new situations in the world, they are helped by their awareness of the general context: they know what the things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations. DeepMind published a work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens." Similarly, some tasks once considered to be AI-complete, like machine translation, are among the capabilities of large language models. AI-complete problems AI-complete problems have been hypothesized to include: AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system) Bongard problems Computer vision (and subproblems such as object recognition) Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation) Autonomous driving Dealing with unexpected circumstances while solving any real world problem, whether navigation, planning, or even the kind of reasoning done by expert systems. Formalization Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterized formally. Since many AI problems have no formalization yet, conventional complexity theory does not enable a formal definition of AI-completeness. Research Roman Yampolskiy suggests that a problem C is AI-Complete if it has two properties: it is in the set of AI problems (Human Oracle-solvable), and any AI problem can be converted into C by some polynomial time algorithm.
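In symbols, this definition can be sketched roughly as follows; the notation, with C standing for the candidate problem, AI for the set of Human Oracle-solvable problems, and a subscript-p relation for a polynomial-time reduction, is an illustrative restatement rather than Yampolskiy's exact formalism:

C \text{ is AI-Complete} \iff C \in \mathrm{AI} \ \text{and}\ \forall A \in \mathrm{AI},\; A \leq_{p} C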
On the other hand, a problem H is AI-Hard if and only if there is an AI-Complete problem C that is polynomial time Turing-reducible to H. This also gives as a consequence the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some problem. Yampolskiy has also hypothesized that the Turing Test is a defining feature of AI-completeness. Groppe and Jain classify problems which require artificial general intelligence to reach human-level machine performance as AI-complete, while only restricted versions of AI-complete problems can be solved by current AI systems. For Šekrst, obtaining a polynomial solution to AI-complete problems would not necessarily equal solving the problem of artificial general intelligence; Šekrst also emphasizes the lack of computational complexity research as a limiting factor towards achieving artificial general intelligence. For Kwee-Bintoro and Velez, solving AI-complete problems would have strong repercussions on society. See also ASR-complete List of unsolved problems in computer science Synthetic intelligence References Artificial intelligence Computational problems
AI-complete
[ "Mathematics" ]
908
[ "Mathematical problems", "Computational problems" ]
2,864
https://en.wikipedia.org/wiki/Archaeoastronomy
Archaeoastronomy (also spelled archeoastronomy) is the interdisciplinary or multidisciplinary study of how people in the past "have understood the phenomena in the sky, how they used these phenomena and what role the sky played in their cultures". Clive Ruggles argues it is misleading to consider archaeoastronomy to be the study of ancient astronomy, as modern astronomy is a scientific discipline, while archaeoastronomy considers symbolically rich cultural interpretations of phenomena in the sky by other cultures. It is often twinned with ethnoastronomy, the anthropological study of skywatching in contemporary societies. Archaeoastronomy is also closely associated with historical astronomy, the use of historical records of heavenly events to answer astronomical problems and the history of astronomy, which uses written records to evaluate past astronomical practice. Archaeoastronomy uses a variety of methods to uncover evidence of past practices including archaeology, anthropology, astronomy, statistics and probability, and history. Because these methods are diverse and use data from such different sources, integrating them into a coherent argument has been a long-term difficulty for archaeoastronomers. Archaeoastronomy fills complementary niches in landscape archaeology and cognitive archaeology. Material evidence and its connection to the sky can reveal how a wider landscape can be integrated into beliefs about the cycles of nature, such as Mayan astronomy and its relationship with agriculture. Other examples which have brought together ideas of cognition and landscape include studies of the cosmic order embedded in the roads of settlements. Archaeoastronomy can be applied to all cultures and all time periods. The meanings of the sky vary from culture to culture; nevertheless there are scientific methods which can be applied across cultures when examining ancient beliefs. It is perhaps the need to balance the social and scientific aspects of archaeoastronomy which led Clive Ruggles to describe it as "a field with academic work of high quality at one end but uncontrolled speculation bordering on lunacy at the other". History Two hundred years before John Michell wrote the above, there were no archaeoastronomers and there were no professional archaeologists, but there were astronomers and antiquarians. Some of their works are considered precursors of archaeoastronomy; antiquarians interpreted the astronomical orientation of the ruins that dotted the English countryside as William Stukeley did of Stonehenge in 1740, while John Aubrey in 1678 and Henry Chauncy in 1700 sought similar astronomical principles underlying the orientation of churches. Late in the nineteenth century astronomers such as Richard Proctor and Charles Piazzi Smyth investigated the astronomical orientations of the pyramids. The term archaeoastronomy was advanced by Elizabeth Chesley Baity (following the suggestion of Euan MacKie) in 1973, but as a topic of study it may be much older, depending on how archaeoastronomy is defined. Clive Ruggles says that Heinrich Nissen, working in the mid-nineteenth century was arguably the first archaeoastronomer. Rolf Sinclair says that Norman Lockyer, working in the late 19th and early 20th centuries, could be called the 'father of archaeoastronomy'. Euan MacKie would place the origin even later, stating: "...the genesis and modern flowering of archaeoastronomy must surely lie in the work of Alexander Thom in Britain between the 1930s and the 1970s". 
In the 1960s the work of the engineer Alexander Thom and that of the astronomer Gerald Hawkins, who proposed that Stonehenge was a Neolithic computer, inspired new interest in the astronomical features of ancient sites. The claims of Hawkins were largely dismissed, but this was not the case for Alexander Thom's work, whose survey results of megalithic sites hypothesized widespread practice of accurate astronomy in the British Isles. Euan MacKie, recognizing that Thom's theories needed to be tested, excavated at the Kintraw standing stone site in Argyllshire in 1970 and 1971 to check whether the latter's prediction of an observation platform on the hill slope above the stone was correct. There was an artificial platform there and this apparent verification of Thom's long alignment hypothesis (Kintraw was diagnosed as an accurate winter solstice site) led him to check Thom's geometrical theories at the Cultoon stone circle in Islay, also with a positive result. MacKie therefore broadly accepted Thom's conclusions and published new prehistories of Britain. In contrast, a re-evaluation of Thom's fieldwork by Clive Ruggles argued that Thom's claims of high accuracy astronomy were not fully supported by the evidence. Nevertheless, Thom's legacy remains strong. Edwin C. Krupp wrote in 1979, "Almost singlehandedly he has established the standards for archaeo-astronomical fieldwork and interpretation, and his amazing results have stirred controversy during the last three decades." His influence endures, and the practice of statistical testing of data remains one of the methods of archaeoastronomy. The approach in the New World, where anthropologists began to consider more fully the role of astronomy in Amerindian civilizations, was markedly different. They had access to sources that the prehistory of Europe lacks, such as ethnographies and the historical records of the early colonizers. Following the pioneering example of Anthony Aveni, this allowed New World archaeoastronomers to make claims for motives which in the Old World would have been mere speculation. The concentration on historical data led to some claims of high accuracy that were comparatively weak when set against the statistically led investigations in Europe. This came to a head at a meeting sponsored by the International Astronomical Union (IAU) in Oxford in 1981. The methodologies and research questions of the participants were considered so different that the conference proceedings were published as two volumes. Nevertheless, the conference was considered a success in bringing researchers together and Oxford conferences have continued every four or five years at locations around the world. The subsequent conferences have resulted in a move to more interdisciplinary approaches, with researchers aiming to combine the contextuality of archaeological research. This broadly describes the state of archaeoastronomy today: rather than merely establishing the existence of ancient astronomies, archaeoastronomers seek to explain why people would have an interest in the night sky. Relations to other disciplines Archaeoastronomy has long been seen as an interdisciplinary field that uses written and unwritten evidence to study the astronomies of other cultures.
As such, it can be seen as connecting other disciplinary approaches for investigating ancient astronomy: astroarchaeology (an obsolete term for studies that draw astronomical information from the alignments of ancient architecture and landscapes), history of astronomy (which deals primarily with the written textual evidence), and ethnoastronomy (which draws on the ethnohistorical record and contemporary ethnographic studies). Reflecting Archaeoastronomy's development as an interdisciplinary subject, research in the field is conducted by investigators trained in a wide range of disciplines. Authors of recent doctoral dissertations have described their work as concerned with the fields of archaeology and cultural anthropology; with various fields of history including the history of specific regions and periods, the history of science and the history of religion; and with the relation of astronomy to art, literature and religion. Only rarely did they describe their work as astronomical, and then only as a secondary category. Both practicing archaeoastronomers and observers of the discipline approach it from different perspectives. Other researchers relate archaeoastronomy to the history of science, either as it relates to a culture's observations of nature and the conceptual framework they devised to impose an order on those observations or as it relates to the political motives which drove particular historical actors to deploy certain astronomical concepts or techniques. Art historian Richard Poss took a more flexible approach, maintaining that the astronomical rock art of the North American Southwest should be read employing "the hermeneutic traditions of western art history and art criticism" Astronomers, however, raise different questions, seeking to provide their students with identifiable precursors of their discipline, and are especially concerned with the important question of how to confirm that specific sites are, indeed, intentionally astronomical. The reactions of professional archaeologists to archaeoastronomy have been decidedly mixed. Some expressed incomprehension or even hostility, varying from a rejection by the archaeological mainstream of what they saw as an archaeoastronomical fringe to an incomprehension between the cultural focus of archaeologists and the quantitative focus of early archaeoastronomers. Yet archaeologists have increasingly come to incorporate many of the insights from archaeoastronomy into archaeology textbooks and, as mentioned above, some students wrote archaeology dissertations on archaeoastronomical topics. Since archaeoastronomers disagree so widely on the characterization of the discipline, they even dispute its name. All three major international scholarly associations relate archaeoastronomy to the study of culture, using the term Astronomy in Culture or a translation. Michael Hoskin sees an important part of the discipline as fact-collecting, rather than theorizing, and proposed to label this aspect of the discipline Archaeotopography. Ruggles and Saunders proposed Cultural Astronomy as a unifying term for the various methods of studying folk astronomies. Others have argued that astronomy is an inaccurate term, what are being studied are cosmologies and people who object to the use of logos have suggested adopting the Spanish cosmovisión. 
When debates polarise between techniques, the methods are often referred to by a colour code, based on the colours of the bindings of the two volumes from the first Oxford Conference, where the approaches were first distinguished. Green (Old World) archaeoastronomers rely heavily on statistics and are sometimes accused of missing the cultural context of what is a social practice. Brown (New World) archaeoastronomers in contrast have abundant ethnographic and historical evidence and have been described as 'cavalier' on matters of measurement and statistical analysis. Finding a way to integrate various approaches has been a subject of much discussion since the early 1990s. Methodology There is no one way to do archaeoastronomy. The divisions between archaeoastronomers tend not to be between the physical scientists and the social scientists. Instead, it tends to depend on the location and/or kind of data available to the researcher. In the Old World, there is little data but the sites themselves; in the New World, the sites were supplemented by ethnographic and historic data. The effects of the isolated development of archaeoastronomy in different places can still often be seen in research today. Research methods can be classified as falling into one of two approaches, though more recent projects often use techniques from both categories. Green archaeoastronomy Green archaeoastronomy is named after the cover of the book Archaeoastronomy in the Old World. It is based primarily on statistics and is particularly apt for prehistoric sites where the social evidence is relatively scant compared to the historic period. The basic methods were developed by Alexander Thom during his extensive surveys of British megalithic sites. Thom wished to examine whether or not prehistoric peoples used high-accuracy astronomy. He believed that by using horizon astronomy, observers could make estimates of dates in the year to a specific day. The observation required finding a place where on a specific date the Sun set into a notch on the horizon. A common theme is a mountain that blocked the Sun, but on the right day would allow the tiniest fraction to re-emerge on the other side for a 'double sunset'. The animation below shows two sunsets at a hypothetical site, one the day before the summer solstice and one at the summer solstice, which has a double sunset. To test this idea he surveyed hundreds of stone rows and circles. Any individual alignment could indicate a direction by chance, but he planned to show that together the distribution of alignments was non-random, showing that there was an astronomical intent to the orientation of at least some of the alignments. His results indicated the existence of eight, sixteen, or perhaps even thirty-two approximately equal divisions of the year. The two solstices, the two equinoxes and four cross-quarter days, days halfway between a solstice and the equinox were associated with the medieval Celtic calendar. While not all these conclusions have been accepted, it has had an enduring influence on archaeoastronomy, especially in Europe. Euan MacKie has supported Thom's analysis, to which he added an archaeological context by comparing Neolithic Britain to the Mayan civilization to argue for a stratified society in this period. To test his ideas he conducted a couple of excavations at proposed prehistoric observatories in Scotland. Kintraw is a site notable for its four-meter high standing stone. 
Thom proposed that this was a foresight to a point on the distant horizon between Beinn Shianaidh and Beinn o'Chaolias on Jura. This, Thom argued, was a notch on the horizon where a double sunset would occur at midwinter. However, from ground level, this sunset would be obscured by a ridge in the landscape, and the viewer would need to be raised by two meters: another observation platform was needed. This was identified across a gorge where a platform was formed from small stones. The lack of artifacts caused concern for some archaeologists and the petrofabric analysis was inconclusive, but further research at Maes Howe and on the Bush Barrow Lozenge led MacKie to conclude that while the term 'science' may be anachronistic, Thom was broadly correct upon the subject of high-accuracy alignments. In contrast Clive Ruggles has argued that there are problems with the selection of data in Thom's surveys. Others have noted that the accuracy of horizon astronomy is limited by variations in refraction near the horizon. A deeper criticism of Green archaeoastronomy is that while it can answer whether there was likely to be an interest in astronomy in past times, its lack of a social element means that it struggles to answer why people would be interested, which makes it of limited use to people asking questions about the society of the past. Keith Kintigh wrote: "To put it bluntly, in many cases it doesn't matter much to the progress of anthropology whether a particular archaeoastronomical claim is right or wrong because the information doesn't inform the current interpretive questions." Nonetheless, the study of alignments remains a staple of archaeoastronomical research, especially in Europe. Brown archaeoastronomy In contrast to the largely alignment-oriented statistically led methods of green archaeoastronomy, brown archaeoastronomy has been identified as being closer to the history of astronomy or to cultural history, insofar as it draws on historical and ethnographic records to enrich its understanding of early astronomies and their relations to calendars and ritual. The many records of native customs and beliefs made by Spanish chroniclers and ethnographic researchers means that brown archaeoastronomy is often associated with studies of astronomy in the Americas. One famous site where historical records have been used to interpret sites is Chichen Itza. Rather than analyzing the site and seeing which targets appear popular, archaeoastronomers have instead examined the ethnographic records to see what features of the sky were important to the Mayans and then sought archaeological correlates. One example which could have been overlooked without historical records is the Mayan interest in the planet Venus. This interest is attested to by the Dresden codex which contains tables with information about Venus's appearances in the sky. These cycles would have been of astrological and ritual significance as Venus was associated with Quetzalcoatl or Xolotl. Associations of architectural features with settings of Venus can be found in Chichen Itza, Uxmal, and probably some other Mesoamerican sites. The Temple of the Warriors bears iconography depicting feathered serpents associated with Quetzalcoatl or Kukulcan. This means that the building's alignment towards the place on the horizon where Venus first appears in the evening sky (when it coincides with the rainy season) may be meaningful. 
However, since both the date and the azimuth of this event change continuously, a solar interpretation of this orientation is much more likely. Anthony Aveni claims that the Caracol is another building at Chichen Itza associated with the planet Venus, in the form of Kukulcan, and with the rainy season. This is a building with a circular tower and doors facing the cardinal directions. The base faces the most northerly setting of Venus. Additionally, the pillars of a stylobate on the building's upper platform were painted black and red. These are colours associated with Venus as an evening and morning star. However, the windows in the tower seem to have been little more than slots, making them poor at letting light in, but providing a suitable place to view out. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles considered that the interpretation of the Caracol as an observatory site was debated among specialists, meeting the second of their four levels of site credibility. Aveni states that one of the strengths of the brown methodology is that it can explore astronomies invisible to statistical analysis, and offers the astronomy of the Incas as another example. The empire of the Incas was conceptually divided using ceques, radial routes emanating from the capital at Cusco. Thus there are alignments in all directions, which would suggest there is little of astronomical significance. However, ethnohistorical records show that the various directions do have cosmological and astronomical significance, with various points in the landscape being significant at different times of the year. In eastern Asia archaeoastronomy has developed from the history of astronomy, and much archaeoastronomy is searching for material correlates of the historical record. This is due to the rich historical record of astronomical phenomena which, in China, stretches back into the Han dynasty, in the second century BC. A criticism of this method is that it can be statistically weak. Schaefer in particular has questioned how robust the claimed alignments in the Caracol are. Because of the wide variety of evidence, which can include artefacts as well as sites, there is no one way to practice archaeoastronomy. Despite this it is accepted that archaeoastronomy is not a discipline that sits in isolation. Because archaeoastronomy is an interdisciplinary field, whatever is being investigated should make sense both archaeologically and astronomically. Studies are more likely to be considered sound if they use theoretical tools found in archaeology, like analogy and homology, and if they can demonstrate an understanding of accuracy and precision found in astronomy. Both quantitative analyses and interpretations based on ethnographic analogies and other contextual evidence have recently been applied in systematic studies of architectural orientations in the Maya area and in other parts of Mesoamerica. Source materials Because archaeoastronomy is about the many and various ways people interacted with the sky, there is a diverse range of sources giving information about astronomical practices. Alignments A common source of data for archaeoastronomy is the study of alignments. This is based on the assumption that the axis of alignment of an archaeological site is meaningfully oriented towards an astronomical target.
Brown archaeoastronomers may justify this assumption through reading historical or ethnographic sources, while green archaeoastronomers tend to prove that alignments are unlikely to be selected by chance, usually by demonstrating common patterns of alignment at multiple sites. An alignment is calculated by measuring the azimuth, the angle from north, of the structure and the altitude of the horizon it faces. The azimuth is usually measured using a theodolite or a compass. A compass is easier to use, though the deviation of the Earth's magnetic field from true north, known as its magnetic declination, must be taken into account. Compasses are also unreliable in areas prone to magnetic interference, such as sites being supported by scaffolding. Additionally, a compass can only measure the azimuth to a precision of half a degree. A theodolite can be considerably more accurate if used correctly, but it is also considerably more difficult to use correctly. There is no inherent way to align a theodolite with North and so the scale has to be calibrated using astronomical observation, usually the position of the Sun. Because the position of celestial bodies changes with the time of day due to the Earth's rotation, the time of these calibration observations must be accurately known, or else there will be a systematic error in the measurements. Horizon altitudes can be measured with a theodolite or a clinometer. Artifacts For artifacts such as the Sky Disc of Nebra, alleged to be a Bronze Age artefact depicting the cosmos, the analysis would be similar to typical post-excavation analysis as used in other sub-disciplines in archaeology. An artefact is examined and attempts are made to draw analogies with historical or ethnographical records of other peoples. The more parallels that can be found, the more likely an explanation is to be accepted by other archaeologists. A more mundane example is the presence of astrological symbols found on some shoes and sandals from the Roman Empire. The use of shoes and sandals is well known, but Carol van Driel-Murray has proposed that astrological symbols etched onto sandals gave the footwear spiritual or medicinal meanings. This is supported through citation of other known uses of astrological symbols and their connection to medical practice and with the historical records of the time. Another well-known artefact with an astronomical use is the Antikythera mechanism. In this case, analysis of the artefact, and reference to the description of similar devices described by Cicero, would indicate a plausible use for the device. The argument is bolstered by the presence of symbols on the mechanism, allowing the disc to be read. Art and inscriptions Art and inscriptions may not be confined to artefacts, but also appear painted or inscribed on an archaeological site. Sometimes inscriptions are helpful enough to give instructions as to a site's use. For example, a Greek inscription on a stele (from Itanos) has been translated as: "Patron set this up for Zeus Epopsios. Winter solstice. Should anyone wish to know: off 'the little pig' and the stele the sun turns." From Mesoamerica come Mayan and Aztec codices. These are folding books made from Amatl, processed tree bark on which are glyphs in Mayan or Aztec script. The Dresden codex contains information regarding the Venus cycle, confirming its importance to the Mayans. More problematic are those cases where the movement of the Sun at different times and seasons causes light and shadow interactions with petroglyphs.
A widely known example is the Sun Dagger of Fajada Butte at which a glint of sunlight passes over a spiral petroglyph. The location of a dagger of light on the petroglyph varies throughout the year. At the summer solstice a dagger can be seen through the heart of the spiral; at the winter solstice two daggers appear to either side of it. It is proposed that this petroglyph was created to mark these events. Recent studies have identified many similar sites in the US Southwest and Northwestern Mexico. It has been argued that the number of solstitial markers at these sites provides statistical evidence that they were intended to mark the solstices. The Sun Dagger site on Fajada Butte in Chaco Canyon, New Mexico, stands out for its explicit light markings that record all the key events of both the solar and lunar cycles: summer solstice, winter solstice, equinox, and the major and minor lunar standstills of the Moon's 18.6 year cycle. In addition, at two other sites on Fajada Butte, there are five light markings on petroglyphs recording the summer and winter solstices, equinox and solar noon. Numerous buildings and interbuilding alignments of the great houses of Chaco Canyon and outlying areas are oriented to the same solar and lunar directions that are marked at the Sun Dagger site. If no ethnographic or historical data are found which can support this assertion then acceptance of the idea relies upon whether or not there are enough petroglyph sites in North America that such a correlation could occur by chance. It is helpful when petroglyphs are associated with existing peoples. This allows ethnoastronomers to question informants as to the meaning of such symbols. Ethnographies As well as the materials left by peoples themselves, there are also the reports of others who have encountered them. The historical records of the Conquistadores are a rich source of information about the pre-Columbian Americans. Ethnographers also provide material about many other peoples. Anthony Aveni uses the importance of zenith passages as an example of the importance of ethnography. For peoples living between the tropics of Cancer and Capricorn there are two days of the year when the noon Sun passes directly overhead and casts no shadow. In parts of Mesoamerica this was considered a significant day as it would herald the arrival of rains, and so play a part in the cycle of agriculture. This knowledge is still considered important amongst Mayan Indians living in Central America today. The ethnographic records suggested to archaeoastronomers that this day may have been important to the ancient Mayans. There are also shafts known as 'zenith tubes' which illuminate subterranean rooms when the Sun passes overhead, found at places like Monte Albán and Xochicalco. It is only through the ethnography that we can speculate that the timing of the illumination was considered important in Mayan society. Alignments to the sunrise and sunset on the day of the zenith passage have been claimed to exist at several sites. However, it has been shown that, since there are very few orientations that can be related to these phenomena, they likely have different explanations. Ethnographies also caution against over-interpretation of sites. At a site in Chaco Canyon can be found a pictograph with a star, crescent and hand. It has been argued by some astronomers that this is a record of the 1054 Supernova. However, recent reexaminations of related 'supernova petroglyphs' raise questions about such sites in general. 
Cotte and Ruggles used the supernova petroglyph as an example of a completely refuted site, and anthropological evidence suggests other interpretations. The Zuni people, who claim a strong ancestral affiliation with Chaco, marked their sun-watching station with a crescent, star, hand and sundisc, similar to those found at the Chaco site. Ethnoastronomy is also an important field outside of the Americas. For example, anthropological work with Aboriginal Australians is producing much information about their Indigenous astronomies and about their interaction with the modern world. Recreating the ancient sky Once the researcher has data to test, it is often necessary to attempt to recreate ancient sky conditions to place the data in its historical environment. Declination To calculate what astronomical features a structure faced, a coordinate system is needed. The stars provide such a system. On a clear night the stars can be observed spinning around the celestial pole. This point is the celestial pole, at a declination of +90° for the North Celestial Pole or −90° if observing the Southern Celestial Pole. The concentric circles the stars trace out are lines of celestial latitude, known as declination. The arc connecting the points on the horizon due East and due West (if the horizon is flat), through all points midway between the Celestial Poles, is the Celestial Equator, which has a declination of 0°. The visible declinations vary depending on where you are on the globe. Only an observer at the North Pole of Earth would be unable to see any stars from the Southern Celestial Hemisphere at night. Once a declination has been found for the point on the horizon that a building faces, it is then possible to say whether a specific body can be seen in that direction. Solar positioning While the stars are fixed to their declinations, the Sun is not. The rising point of the Sun varies throughout the year. It swings between two limits marked by the solstices, a bit like a pendulum, slowing as it reaches the extremes, but passing rapidly through the midpoint. If an archaeoastronomer can calculate from the azimuth and horizon height that a site was built to view a declination of +23.5°, then he or she need not wait until 21 June to confirm the site does indeed face the summer solstice. For more information see History of solar observation. Lunar positioning The Moon's appearance is considerably more complex. Its motion, like the Sun's, is between two limits—known as lunistices rather than solstices. However, its travel between lunistices is considerably faster. It takes a sidereal month to complete its cycle rather than the year-long trek of the Sun. This is further complicated as the lunistices marking the limits of the Moon's movement move on an 18.6 year cycle. For slightly over nine years the extreme limits of the Moon are outside the range of sunrise. For the remaining half of the cycle the Moon never exceeds the limits of the range of sunrise. However, much lunar observation was concerned with the phase of the Moon. The cycle from one New Moon to the next runs on an entirely different cycle, the Synodic month. Thus when examining sites for lunar significance the data can appear sparse due to the extremely variable nature of the Moon. See Moon for more details. Stellar positioning Finally there is often a need to correct for the apparent movement of the stars. On the timescale of human civilisation the stars have largely maintained the same position relative to each other. 
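The conversion from a measured orientation to a declination, as described above, follows the standard spherical-astronomy relation sin δ = sin φ sin h + cos φ cos h cos A, where φ is the site latitude, A the azimuth from true north and h the apparent altitude of the horizon in that direction. The short sketch below is illustrative only and is not drawn from the source; the site numbers are hypothetical, and a real survey would also correct for atmospheric refraction, the body's angular size and, for stellar targets, precession.

```python
import math

def declination(lat_deg, azimuth_deg, horizon_alt_deg):
    """Declination (degrees) of the point on the horizon a structure faces.

    Uses sin(dec) = sin(lat)*sin(alt) + cos(lat)*cos(alt)*cos(az), with the
    azimuth measured clockwise from true north. Refraction and parallax
    corrections are deliberately omitted in this sketch.
    """
    lat, az, alt = map(math.radians, (lat_deg, azimuth_deg, horizon_alt_deg))
    sin_dec = math.sin(lat) * math.sin(alt) + math.cos(lat) * math.cos(alt) * math.cos(az)
    return math.degrees(math.asin(sin_dec))

# Hypothetical survey: a compass azimuth corrected for magnetic declination,
# a horizon altitude from a clinometer, and the site latitude.
true_azimuth = 49.9 + 1.5    # assumed magnetic azimuth + assumed local magnetic declination
print(f"target declination ~ {declination(51.2, true_azimuth, 0.6):+.1f} deg")
# Results near +/-23.4 deg (after the omitted corrections) would suggest a solstitial
# alignment; roughly +/-28.5 or +/-18.5 deg would suggest the lunar standstill limits.
```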
Each night they appear to rotate around the celestial poles due to the Earth's rotation about its axis. However, the Earth spins rather like a spinning top. Not only does the Earth rotate, it wobbles. The Earth's axis takes around 25,800 years to complete one full wobble. The effect for the archaeoastronomer is that stars did not rise over the horizon in the past in the same places as they do today. Nor did the stars rotate around Polaris as they do now. The movement of the Earth's axis was already noticed by the Sumerians over six thousand years ago, when they were able to observe the star Canopus culminating directly above the horizon on the southern meridian for the first time in their oldest and southernmost city, Eridu. For several decades, Canopus was not yet visible in the neighbouring town of Ur to the north-east of Eridu, and therefore, it was called the "Star of the City of Eridu" in Sumerian. In the case of the Egyptian pyramids, it has been shown they were aligned towards Thuban, a faint star in the constellation of Draco. The effect can be substantial over relatively short lengths of time, historically speaking. For instance a person born on 25 December in Roman times would have been born with the Sun in the constellation Capricorn. In the modern period a person born on the same date would have the Sun in Sagittarius due to the precession of the equinoxes. Transient phenomena Additionally there are often transient phenomena, events which do not happen on an annual cycle. Most predictable are events like eclipses. In the case of solar eclipses these can be used to date events in the past. A solar eclipse mentioned by Herodotus enables us to date a battle between the Medes and the Lydians, which, following the eclipse, failed to happen, to 28 May 585 BC. Some comets are predictable, most famously Halley's Comet. Yet as a class of object they remain unpredictable and can appear at any time. Some have extremely lengthy orbital periods which means their past appearances and returns cannot be predicted. Others may have only ever passed through the Solar System once and so are inherently unpredictable. Meteor showers should be predictable, but some meteors are cometary debris and so require calculations of orbits which are currently impossible to complete. Other events noted by the ancients include aurorae, sun dogs and rainbows, all of which are as impossible to predict as the ancient weather, but nevertheless may have been considered important phenomena. Major topics of archaeoastronomical research The use of calendars A common justification for the need for astronomy is the need to develop an accurate calendar for agricultural reasons. Ancient texts like Hesiod's Works and Days, an ancient farming manual, would appear to partially confirm this: astronomical observations are used in combination with ecological signs, such as bird migrations, to determine the seasons. Ethnoastronomical studies of the Hopi of the southwestern United States indicate that they carefully observed the rising and setting positions of the Sun to determine the proper times to plant crops. However, ethnoastronomical work with the Mursi of Ethiopia shows that their luni-solar calendar was somewhat haphazard, indicating the limits of astronomical calendars in some societies. All the same, calendars appear to be an almost universal phenomenon in societies as they provide tools for the regulation of communal activities. One such example is the Tzolk'in calendar of 260 days. 
Together with the 365-day year, it was used in pre-Columbian Mesoamerica, forming part of a comprehensive calendrical system, which combined a series of astronomical observations and ritual cycles. Archaeoastronomical studies throughout Mesoamerica have shown that the orientations of most structures refer to the Sun and were used in combination with the 260-day cycle for scheduling agricultural activities and the accompanying rituals. The distribution of dates and intervals marked by orientations of monumental ceremonial complexes in the area along the southern Gulf Coast in Mexico, dated to about 1100 to 700 BCE, represents the earliest evidence of the use of this cycle. Other peculiar calendars include ancient Greek calendars. These were nominally lunar, starting with the New Moon. In reality the calendar could pause or skip days, with confused citizens inscribing dates by both the civic calendar and ton theoi, by the moon. The lack of any universal calendar for ancient Greece suggests that coordination of panhellenic events such as games or rituals could be difficult and that astronomical symbolism may have been used as a politically neutral form of timekeeping. Orientation measurements in Greek temples and Byzantine churches have been associated with the deity's name day, festivities, and special events. Myth and cosmology Another motive for studying the sky is to understand and explain the universe. In these cultures myth was a tool for achieving this, and the explanations, while not reflecting the standards of modern science, are cosmologies. The Incas arranged their empire to demonstrate their cosmology. The capital, Cusco, was at the centre of the empire and connected to it by means of ceques, conceptually straight lines radiating out from the centre. These ceques connected the centre of the empire to the four suyus, which were regions defined by their direction from Cusco. The notion of a quartered cosmos is common across the Andes. Gary Urton, who has conducted fieldwork in the Andean village of Misminay, has connected this quartering with the appearance of the Milky Way in the night sky. In one season it will bisect the sky and in another bisect it in a perpendicular fashion. The importance of observing cosmological factors is also seen on the other side of the world. The Forbidden City in Beijing is laid out to follow cosmic order, though rather than observing four directions the Chinese system was composed of five directions: North, South, East, West and Centre. The Forbidden City occupied the centre of ancient Beijing. One approaches the Emperor from the south, thus placing him in front of the circumpolar stars. This creates the situation of the heavens revolving around the person of the Emperor. The Chinese cosmology is now better known through its export as feng shui. There is also much information about how the universe was thought to work stored in the mythology of the constellations. The Barasana of the Amazon plan part of their annual cycle based on observation of the stars. When their constellation of the Caterpillar-Jaguar (roughly equivalent to the modern Scorpius) falls, they prepare to catch the pupating caterpillars of the forest as they fall from the trees. The caterpillars provide food at a season when other foods are scarce. A more widely known source of constellation myth is the texts of the Greeks and Romans. The origin of their constellations remains a matter of vigorous and occasionally fractious debate. 
The loss of one of the sisters, Merope, in some Greek myths may reflect an astronomical event wherein one of the stars in the Pleiades disappeared from naked-eye view. Giorgio de Santillana, professor of the History of Science in the School of Humanities at the Massachusetts Institute of Technology, and Hertha von Dechend, professor at Goethe University Frankfurt, argued that the old mythological stories handed down from antiquity were not random fictitious tales but were accurate depictions of celestial cosmology clothed in tales to aid their oral transmission. The chaos, monsters and violence in ancient myths are representative of the forces that shape each age. They argued that ancient myths are the remains of preliterate, late Neolithic astronomy that was lost. Santillana and von Dechend argued in their book Hamlet's Mill: An Essay on Myth and the Frame of Time (1969) that ancient myths have no historical or factual basis other than a cosmological one encoding astronomical phenomena, especially the precession of the equinoxes. Santillana and von Dechend's approach is not widely accepted. Displays of power By including celestial motifs in clothing it becomes possible for the wearer to make the claim that power on Earth is drawn from above. It has been said that the Shield of Achilles described by Homer is also a catalogue of constellations. In North America shields depicted in Comanche petroglyphs appear to include Venus symbolism. Solstitial alignments can also be seen as displays of power. When viewed from a ceremonial plaza on the Island of the Sun (the mythical origin place of the Sun) in Lake Titicaca, the Sun was seen to rise at the June solstice between two towers on a nearby ridge. The sacred part of the island was separated from the remainder of it by a stone wall and ethnographic records indicate that access to the sacred space was restricted to members of the Inca ruling elite. Ordinary pilgrims stood on a platform outside the ceremonial area to see the solstice Sun rise between the towers. In Egypt the temple of Amun-Re at Karnak has been the subject of much study. Evaluation of the site, taking into account the change over time of the obliquity of the ecliptic, shows that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. In a later period the Serapeum of Alexandria was also said to have contained a solar alignment so that, on a specific sunrise, a shaft of light would pass across the lips of the statue of Serapis, thus symbolising the Sun saluting the god. Major sites of archaeoastronomical interest Clive Ruggles and Michel Cotte recently edited a book on heritage sites of astronomy and archaeoastronomy which discussed a worldwide sample of astronomical and archaeoastronomical sites and provided criteria for the classification of archaeoastronomical sites. Newgrange Newgrange is a passage tomb in the Republic of Ireland dating from around 3,300 to 2,900 BC. For a few days around the Winter Solstice, light shines along the central passageway into the heart of the tomb. What makes this notable is not that light shines in the passageway, but that it does not do so through the main entrance. Instead it enters via a hollow box above the main doorway, discovered by Michael O'Kelly. It is this roofbox which strongly indicates that the tomb was built with an astronomical aspect in mind. 
In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Newgrange as an example of a Generally accepted site, the highest of their four levels of credibility. Egypt Since the first modern measurements of the precise cardinal orientations of the Giza pyramids by Flinders Petrie, various astronomical methods have been proposed for the original establishment of these orientations. It was recently proposed that this was done by observing the positions of two stars in the Plough / Big Dipper, which was known to the Egyptians as the thigh. It is thought that a vertical alignment between these two stars, checked with a plumb bob, was used to ascertain where north lay. The deviations from true north using this model reflect the accepted dates of construction. Some have argued that the pyramids were laid out as a map of the three stars in the belt of Orion, although this theory has been criticized by reputable astronomers. The site was instead probably governed by a spectacular hierophany which occurs at the summer solstice, when the Sun, viewed from the Sphinx terrace, forms—together with the two giant pyramids—the symbol Akhet, which was also the name of the Great Pyramid. Further, the south-east corners of all three pyramids align towards the temple of Heliopolis, as first discovered by the Egyptologist Mark Lehner. The astronomical ceiling of the tomb of Senenmut contains the Celestial Diagram depicting circumpolar constellations in the form of discs. Each disc is divided into 24 sections, suggesting a 24-hour time period. Constellations are portrayed as sacred deities of Egypt. The observation of lunar cycles is also evident. El Castillo El Castillo, also known as Kukulcán's Pyramid, is a Mesoamerican step-pyramid built in the centre of the Maya city of Chichen Itza in Mexico. Several architectural features have suggested astronomical elements. Each of the stairways built into the sides of the pyramid has 91 steps. Along with the extra one for the platform at the top, this totals 365 steps, which is possibly one for each day of the year (365.25) or the number of lunar orbits in 10,000 rotations (365.01). A visually striking effect is seen every March and September as an unusual shadow occurs around the equinoxes. Light and shadow phenomena have been proposed to explain a possible architectural hierophany involving the sun at Chichén Itzá in a Maya Toltec structure dating to about 1000 CE. A shadow appears to descend the west balustrade of the northern stairway. The visual effect is of a serpent descending the stairway, with its head at the base in light. Additionally the western face points to sunset around 25 May, traditionally the date of transition from the dry to the rainy season. The intended alignment was, however, likely incorporated in the northern (main) facade of the temple, as it corresponds to sunsets on May 20 and July 24, recorded also by the central axis of Castillo at Tulum. The two dates are separated by 65 and 300 days, and it has been shown that the solar orientations in Mesoamerica regularly correspond to dates separated by calendrically significant intervals (multiples of 13 and 20 days). In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles used the "equinox hierophany" at Chichén Itzá as an example of an Unproven site, the third of their four levels of credibility. 
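A quick check of the arithmetic quoted for El Castillo above, the 91-step stairways and the 65- and 300-day intervals between the May 20 and July 24 sunset dates, can be written in a few lines of Python. This is only an illustrative verification of the figures cited in the text, not an analysis from the source; the year used is arbitrary, since only the day-count arithmetic matters.

```python
from datetime import date

# Stairway count: four stairways of 91 steps plus the top platform.
steps = 4 * 91 + 1
print(steps)                      # 365

# Intervals between the two sunset dates recorded by the northern facade.
d1, d2 = date(2023, 5, 20), date(2023, 7, 24)
forward = (d2 - d1).days          # 65 days
back = 365 - forward              # 300 days
print(forward, back)              # 65 300
print(forward % 13, back % 20)    # 0 0 -> multiples of 13 and 20 respectively
```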
Stonehenge Many astronomical alignments have been claimed for Stonehenge, a complex of megaliths and earthworks in the Salisbury Plain of England. The most famous of these is the midsummer alignment, where the Sun rises over the Heel Stone. However, this interpretation has been challenged by some archaeologists who argue that the midwinter alignment, where the viewer is outside Stonehenge and sees the Sun setting in the henge, is the more significant alignment, and the midsummer alignment may be a coincidence due to local topography. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Stonehenge as an example of a Generally accepted site, the highest of their four levels of credibility. As well as solar alignments, there are proposed lunar alignments. The four station stones mark out a rectangle. The short sides point towards the midsummer sunrise and midwinter sunset. The long sides, if viewed towards the south-east, face the most southerly rising of the Moon. Anthony Aveni notes that these lunar alignments have never gained the acceptance that the solar alignments have received. Maeshowe This is an architecturally outstanding Neolithic chambered tomb on the mainland of Orkney, Scotland—probably dating to the early 3rd millennium BC, and where the setting Sun at midwinter shines down the entrance passage into the central chamber (see Newgrange). In the 1990s further investigations were carried out to discover whether this was an accurate or an approximate solar alignment. Several new aspects of the site were discovered. In the first place, the entrance passage faces the hills of the island of Hoy, about 10 miles away. Secondly, it consists of two straight lengths, angled at a few degrees to each other. Thirdly, the outer part is aligned towards the midwinter sunset position on a level horizon just to the left of Ward Hill on Hoy. Fourthly, the inner part points directly at the Barnhouse standing stone about 400 m away and then to the right end of the summit of Ward Hill, just before it dips down to the notch between it and Cuilags to the right. This indicated line points to sunset on the first sixteenths of the solar year (according to A. Thom) before and after the winter solstice, and the notch at the base of the right slope of the Hill is at the same declination. Fifthly, a similar 'double sunset' phenomenon is seen at the right end of Cuilags, also on Hoy; here the date is the first eighth of the year before and after the winter solstice, at the beginning of November and February respectively—the Old Celtic festivals of Samhain and Imbolc. This alignment is not indicated by an artificial structure but gains plausibility from the other two indicated lines. Maeshowe is thus an extremely sophisticated calendar site which must have been positioned carefully in order to use the horizon foresights in the ways described. Uxmal Uxmal is a Mayan city in the Puuc Hills of the Yucatán Peninsula, Mexico. The Governor's Palace at Uxmal is often used as an exemplar of why it is important to combine ethnographic and alignment data. The palace is aligned with an azimuth of 118° on the pyramid of Cehtzuc. This alignment corresponds approximately to the southernmost rising and, with a much greater precision, to the northernmost setting of Venus; both phenomena occur once every eight years. By itself this would not be sufficient to argue for a meaningful connection between the two events. 
The palace has to be aligned in one direction or another and why should the rising of Venus be any more important than the rising of the Sun, Moon, other planets, Sirius et cetera? The answer given is that not only does the palace point towards significant points of Venus, it is also covered in glyphs which stand for Venus and Mayan zodiacal constellations. Moreover, the great northerly extremes of Venus always occur in late April or early May, coinciding with the onset of the rainy season. The Venus glyphs placed in the cheeks of the Maya rain god Chac, most likely referring to the concomitance of these phenomena, support the west-working orientation scheme. Chaco Canyon In Chaco Canyon, the center of the ancient Pueblo culture in the American Southwest, numerous solar and lunar light markings and architectural and road alignments have been documented. These findings date to the 1977 discovery of the Sun Dagger site by Anna Sofaer. Three large stone slabs leaning against a cliff channel light and shadow markings onto two spiral petroglyphs on the cliff wall, marking the solstices, equinoxes and the lunar standstills of the 18.6 year cycle of the moon. Subsequent research by the Solstice Project and others demonstrated that numerous building and interbuilding alignments of the great houses of Chaco Canyon are oriented to solar, lunar and cardinal directions. In addition, research shows that the Great North Road, a thirty-five-mile engineered "road", was constructed not for utilitarian purposes but rather to connect the ceremonial center of Chaco Canyon with the direction north. Lascaux Cave In recent years, new research has suggested that the Lascaux cave paintings in France may incorporate prehistoric star charts. Michael Rappenglueck of the University of Munich argues that some of the non-figurative dot clusters and dots within some of the figurative images correlate with the constellations of Taurus, the Pleiades and the grouping known as the "Summer Triangle". Based on her own study of the astronomical significance of Bronze Age petroglyphs in the Vallée des Merveilles and her extensive survey of other prehistoric cave painting sites in the region—most of which appear to have been selected because the interiors are illuminated by the setting Sun on the day of the winter solstice—French researcher Chantal Jègues-Wolkiewiez has further proposed that the gallery of figurative images in the Great Hall represents an extensive star map and that key points on major figures in the group correspond to stars in the main constellations as they appeared in the Paleolithic. Applying phylogenetics to myths of the Cosmic Hunt, Julien d'Huy suggested that the palaeolithic version of this story could be the following: there is an animal that is a horned herbivore, especially an elk. One human pursues this ungulate. The hunt locates or gets to the sky. The animal is alive when it is transformed into a constellation. It forms the Big Dipper. This story may be represented in the famous Lascaux shaft 'scene'. Fringe archaeoastronomy Archaeoastronomy owes something of its poor reputation among scholars to its occasional misuse to advance a range of pseudo-historical accounts. During the 1930s, Otto S. Reuter compiled a study entitled Germanische Himmelskunde, or "Teutonic Skylore". 
The astronomical orientations of ancient monuments claimed by Reuter and his followers would place the ancient Germanic peoples ahead of the Ancient Near East in the field of astronomy, demonstrating the intellectual superiority of the "Aryans" (Indo-Europeans) over the Semites. More recently I. J. Gallagher, R. L. Pyle, and B. Fell interpreted inscriptions in West Virginia as a description in the Celtic Ogham alphabet of the supposed winter solstitial marker at the site. The controversial translation was supposedly validated by a problematic archaeoastronomical indication in which the winter solstice Sun shone on an inscription of the Sun at the site. Subsequent analyses criticized its cultural inappropriateness, as well as its linguistic and archaeoastronomical claims, describing it as an example of "cult archaeology". Archaeoastronomy is sometimes related to the fringe discipline of Archaeocryptography, whose followers attempt to find underlying mathematical orders beneath the proportions, size, and placement of archaeoastronomical sites such as Stonehenge and the Pyramid of Kukulcán at Chichen Itza. India Since the 19th century, numerous scholars have sought to use archaeoastronomical calculations to demonstrate the antiquity of Ancient Indian Vedic culture, computing the dates of astronomical observations ambiguously described in ancient poetry to as early as 4000 BC. David Pingree, a historian of Indian astronomy, condemned "the scholars who perpetrate wild theories of prehistoric science and call themselves archaeoastronomers". Organisations There are currently several academic organisations for scholars of archaeoastronomy (including ethnoastronomy and Indigenous astronomy). ISAAC – the International Society for Archaeoastronomy and Astronomy in Culture was founded in 1996 as the global society for the field. It sponsors the Oxford conferences and the Journal of Astronomy in Culture. SEAC – La Société Européenne pour l'Astronomie dans la Culture was founded in 1992 with a focus on broader Europe. SEAC holds annual conferences in Europe and publishes refereed conference proceedings on an annual basis. SIAC – La Sociedad Interamericana de Astronomía en la Cultura was founded in 2003 with a focus on Latin America. SCAAS – The Society for Cultural Astronomy in the American Southwest was founded in 2009 as a regional organisation focusing on the astronomies of the native peoples of the Southwestern United States; it has since held seven meetings and workshops. AAAC – the Australian Association for Astronomy in Culture was founded in 2020 in Australia, focusing on Aboriginal and Torres Strait Islander astronomy. The Romanian Society for Cultural Astronomy was founded in 2019, holding an annual international conference and publishing the first monograph on archaeo- and ethnoastronomy in Romania (2019). SMART – the Society of Māori Astronomy Research and Traditions was founded in Aotearoa/New Zealand in 2013, focusing on Maori astronomy. Native Skywatchers was founded in 2007 in Minnesota, USA, to promote Native American star knowledge, particularly of the Lakota and Ojibwe peoples of the northern US and Canada. Publications Additionally the Journal for the History of Astronomy publishes many archaeoastronomical papers. For twenty-seven volumes (from 1979 to 2002) it published an annual supplement, Archaeoastronomy. The Journal of Astronomical History and Heritage, Culture & Cosmos, and the Journal of Skyscape Archaeology also publish papers on archaeoastronomy. 
Academic programs National projects and university programs including, or dedicated to, cultural astronomy are found globally. They include: The Sophia Centre for Cosmology in Culture at the University of Wales - Trinity Saint David in Lampeter, UK. The Cultural Astronomy Program at the University of Melbourne in Australia. The Tata Institute of Fundamental Research has also made findings in this field. See also References Citations Bibliography Šprajc, Ivan (2015). Governor's Palace at Uxmal. In: Handbook of Archaeoastronomy and Ethnoastronomy, ed. by Clive L. N. Ruggles, New York: Springer, pp. 773–81. Šprajc, Ivan, and Pedro Francisco Sánchez Nava (2013). Astronomía en la arquitectura de Chichén Itzá: una reevaluación. Estudios de Cultura Maya XLI: 31–60. Further reading External links Astronomy before History - A chapter from The Cambridge Concise History of Astronomy, Michael Hoskin ed., 1999. Clive Ruggles: images, bibliography, software, and synopsis of his course at the University of Leicester. Traditions of the Sun – NASA and others exploring the world's ancient observatories. Ancient Observatories: Timeless Knowledge – NASA poster on ancient (and modern) observatories. Astronomy is the most ancient of the sciences. (About Kazakh folk astronomy) Ancient astronomy Astronomical sub-disciplines Archaeological sub-disciplines Traditional knowledge Articles containing video clips
Archaeoastronomy
[ "Astronomy" ]
11,590
[ "Ancient astronomy", "Archaeoastronomy", "Astronomical sub-disciplines", "History of astronomy" ]
2,866
https://en.wikipedia.org/wiki/Ammeter
An ammeter (abbreviation of ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured. Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. It is generally represented by letter 'A' in a circuit. History The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed a compass needle was deflected from pointing North when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – the instruments were called "multipliers". The word rheoscope as a detector of electrical currents was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word makeup is similar to that of rheostat (also coined by Wheatstone) which was a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope may still be encountered. Types Some instruments are panel meters, meant to be mounted on some sort of control panel. Of these, the flat, horizontal or vertical type is often called an edgewise meter. Moving-coil The D'Arsonval galvanometer is a moving coil ammeter. It uses magnetic deflection, where current passing through a coil placed in the magnetic field of a permanent magnet causes the coil to move. The modern form of this instrument was developed by Edward Weston, and uses two spiral springs to provide the restoring force. The uniform air gap between the iron core and the permanent magnet poles make the deflection of the meter linearly proportional to current. These meters have linear scales. Basic meter movements can have full-scale deflection for currents from about 25 microamperes to 10 milliamperes. Because the magnetic field is polarised, the meter needle acts in opposite directions for each direction of current. A DC ammeter is thus sensitive to which polarity it is connected in; most are marked with a positive terminal, but some have centre-zero mechanisms and can display currents in either direction. A moving coil meter indicates the average (mean) of a varying current through it, which is zero for AC. For this reason, moving-coil meters are only usable directly for DC, not AC. This type of meter movement is extremely common for both ammeters and other meters derived from them, such as voltmeters and ohmmeters. Moving magnet Moving magnet ammeters operate on essentially the same principle as moving coil, except that the coil is mounted in the meter case, and a permanent magnet moves the needle. 
Moving magnet Ammeters are able to carry larger currents than moving coil instruments, often several tens of amperes, because the coil can be made of thicker wire and the current does not have to be carried by the hairsprings. Indeed, some Ammeters of this type do not have hairsprings at all, instead using a fixed permanent magnet to provide the restoring force. Electrodynamic An electrodynamic ammeter uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current and also indicates true RMS for AC. See wattmeter for an alternative use for this instrument. Moving-iron Moving iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. The moving-iron meter was invented by Austrian engineer Friedrich Drexler in 1884. This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer, and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil and induces a magnetic field in both vanes, the vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs. The deflection of a moving iron meter is proportional to the square of the current. Consequently, such meters would normally have a nonlinear scale, but the iron parts are usually modified in shape to make the scale fairly linear over most of its range. Moving iron instruments indicate the RMS value of any AC waveform applied. Moving iron ammeters are commonly used to measure current in industrial frequency AC circuits. Hot-wire In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they were sometimes used in measuring radio-frequency current. These also measure true RMS for an applied AC. Digital In much the same way as the analogue ammeter formed the basis for a wide variety of derived meters, including voltmeters, the basic mechanism for a digital meter is a digital voltmeter mechanism, and other types of meter are built around this. Digital ammeter designs use a shunt resistor to produce a calibrated voltage proportional to the current flowing. This voltage is then measured by a digital voltmeter, through use of an analog-to-digital converter (ADC); the digital display is calibrated to display the current through the shunt. Such instruments are often calibrated to indicate the RMS value for a sine wave only, but many designs will indicate true RMS within limitations of the wave crest factor. Integrating There is also a range of devices referred to as integrating ammeters. In these ammeters the current is summed over time, giving as a result the product of current and time; which is proportional to the electrical charge transferred with that current. These can be used for metering energy (the charge needs to be multiplied by the voltage to give energy) or for estimating the charge of a battery or capacitor. Picoammeter A picoammeter, or pico ammeter, measures very low electric current, usually from the picoampere range at the lower end to the milliampere range at the upper end. Picoammeters are used where the current being measured is below the limits of sensitivity of other devices, such as multimeters. 
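The shunt-plus-ADC arrangement described for digital ammeters above can be illustrated with a short sketch. This is not taken from the source; the shunt value, amplifier gain, ADC resolution and reference voltage are arbitrary assumptions chosen only to show the scaling from raw ADC counts back to a displayed current.

```python
# Illustrative conversion chain for a digital ammeter: the load current develops
# a small voltage across a shunt resistor, the ADC digitises that voltage, and
# the firmware scales the reading back into amperes for the display.

SHUNT_OHMS = 0.010       # assumed 10 milliohm shunt
V_REF = 2.048            # assumed ADC reference voltage, volts
ADC_BITS = 16            # assumed ADC resolution
GAIN = 32                # assumed amplifier gain ahead of the ADC

def counts_to_amperes(adc_counts: int) -> float:
    """Convert a raw ADC reading into the current through the shunt."""
    v_adc = adc_counts * V_REF / (2 ** ADC_BITS)   # voltage seen at the ADC input
    v_shunt = v_adc / GAIN                          # voltage actually across the shunt
    return v_shunt / SHUNT_OHMS                     # Ohm's law: I = V / R

# Example: a mid-scale reading.
print(f"{counts_to_amperes(32768):.3f} A")
```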
Most picoammeters use a "virtual short" technique and have several different measurement ranges that must be switched between to cover multiple decades of measurement. Other modern picoammeters use log compression and a "current sink" method that eliminates range switching and associated voltage spikes. Special design and usage considerations, such as special insulators and driven shields, must be observed in order to reduce leakage currents which may swamp measurements. Triaxial cable is often used for probe connections. Application Ammeters must be connected in series with the circuit to be measured. For relatively small currents (up to a few amperes), an ammeter may pass the whole of the circuit current. For larger direct currents, a shunt resistor carries most of the circuit current and a small, accurately-known fraction of the current passes through the meter movement. For alternating current circuits, a current transformer may be used to provide a convenient small current to drive an instrument, such as 1 or 5 amperes, while the primary current to be measured is much larger (up to thousands of amperes). The use of a shunt or current transformer also allows convenient location of the indicating meter without the need to run heavy circuit conductors up to the point of observation. In the case of alternating current, the use of a current transformer also isolates the meter from the high voltage of the primary circuit. A shunt provides no such isolation for a direct-current ammeter, but where high voltages are used it may be possible to place the ammeter in the "return" side of the circuit which may be at low potential with respect to earth. Ammeters must not be connected directly across a voltage source since their internal resistance is very low and excess current would flow. Ammeters are designed for a low voltage drop across their terminals, much less than one volt; the extra circuit losses produced by the ammeter are called its "burden" on the measured circuit. Ordinary Weston-type meter movements can measure only milliamperes at most, because the springs and practical coils can carry only limited currents. To measure larger currents, a resistor called a shunt is placed in parallel with the meter. The resistance of a shunt is in the integer to fractional milliohm range. Nearly all of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a full-scale deflection (FSD) of 50 mV, so shunts are typically designed to produce a voltage drop of 50 mV when carrying their full rated current. To make a multi-range ammeter, a selector switch can be used to connect one of a number of shunts across the meter. It must be a make-before-break switch to avoid damaging current surges through the meter movement when switching ranges. A better arrangement is the Ayrton shunt or universal shunt, invented by William E. Ayrton, which does not require a make-before-break switch. It also avoids any inaccuracy because of contact resistance. Assuming, for example, a movement with a full-scale voltage of 50 mV and desired current ranges of 10 mA, 100 mA, and 1 A, the resistance values of the three sections of the shunt chain would be: R1 = 4.5 ohms, R2 = 0.45 ohm, R3 = 0.05 ohm. And if the movement resistance is 1000 ohms, for example, R1 must be adjusted to 4.525 ohms. Switched shunts are rarely used for currents above 10 amperes. 
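The Ayrton-shunt figures quoted above can be reproduced with a short calculation. This sketch is not from the source, and because the original figure is not shown here, the labelling of R1, R2 and R3 as successive sections of the tapped shunt chain (R3 carrying the highest range) is an assumption about the usual Ayrton-shunt convention.

```python
# Sizing the Ayrton shunt for the example quoted in the text:
# a 50 mV full-scale movement and ranges of 10 mA, 100 mA and 1 A.

V_FS = 0.050                    # full-scale voltage across the movement, volts
RANGES = [0.010, 0.100, 1.0]    # desired full-scale currents, amperes

# First approximation: ignore the movement's own current, so the tapped chain
# seen by each range must total V_FS / I ohms.
totals = [V_FS / i for i in RANGES]     # 5.0, 0.5, 0.05 ohms
r1 = totals[0] - totals[1]              # 4.5 ohm
r2 = totals[1] - totals[2]              # 0.45 ohm
r3 = totals[2]                          # 0.05 ohm
print(f"R1={r1:.3f} ohm, R2={r2:.3f} ohm, R3={r3:.3f} ohm")

# Correction on the most sensitive (10 mA) range for a 1000-ohm movement:
# the movement itself draws 50 uA at full scale, so the chain carries the rest.
R_MOVEMENT = 1000.0
i_movement = V_FS / R_MOVEMENT                 # 50 uA
r_chain = V_FS / (RANGES[0] - i_movement)      # ~5.025 ohm
r1_adjusted = r_chain - r2 - r3                # ~4.525 ohm, as quoted in the text
print(f"adjusted R1 = {r1_adjusted:.3f} ohm")
```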
Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side. A special type of zero-center ammeter for testing high currents in cars and trucks has a pivoted bar magnet that moves the pointer, and a fixed bar magnet to keep the pointer centered with no current. The magnetic field around the wire carrying current to be measured deflects the moving magnet. Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury. In AC circuits, a current transformer can be used to convert the large current in the main circuit into a smaller current more suited to a meter. Some designs of transformer are able to directly convert the magnetic field around a conductor into a small AC current, typically either or at full rated current, that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall effect magnetic field sensors. A portable hand-held clamp-on ammeter is a common tool for maintenance of industrial and commercial electrical equipment, which is temporarily clipped over a wire to measure current. Some recent types have a parallel pair of magnetically soft probes that are placed on either side of the conductor. See also Clamp meter Class of accuracy in electrical measurements Electric circuit Electrical measurements Electrical current#Measurement Electronics List of electronics topics Measurement category Multimeter Ohmmeter Rheoscope Voltmeter Notes References External links — from Lessons in Electric Circuits series main page Electrical meters Electronic test equipment Flow meters
Ammeter
[ "Chemistry", "Technology", "Engineering" ]
2,603
[ "Electronic test equipment", "Measuring instruments", "Fluid dynamics", "Electrical meters", "Flow meters" ]
2,885
https://en.wikipedia.org/wiki/Amoxicillin
Amoxicillin is an antibiotic medication belonging to the aminopenicillin class of the penicillin family. The drug is used to treat bacterial infections such as middle ear infection, strep throat, pneumonia, skin infections, odontogenic infections, and urinary tract infections. It is taken orally (swallowed by mouth), or less commonly by either intramuscular injection or by an IV bolus injection, which is a relatively quick intravenous injection lasting from a couple of seconds to a few minutes. Common adverse effects include nausea and rash. It may also increase the risk of yeast infections and, when used in combination with clavulanic acid, diarrhea. It should not be used in those who are allergic to penicillin. While usable in those with kidney problems, the dose may need to be decreased. Its use in pregnancy and breastfeeding does not appear to be harmful. Amoxicillin is in the β-lactam family of antibiotics. Amoxicillin was discovered in 1958 and came into medical use in 1972. Amoxil was approved for medical use in the United States in 1974, and in the United Kingdom in 1977. It is on the World Health Organization's List of Essential Medicines. It is one of the most commonly prescribed antibiotics in children. Amoxicillin is available as a generic medication. In 2022, it was the 26th most commonly prescribed medication in the United States, with more than 20million prescriptions. Medical uses Amoxicillin is used in the treatment of a number of infections, including acute otitis media, streptococcal pharyngitis, pneumonia, skin infections, urinary tract infections, Salmonella infections, Lyme disease, and chlamydia infections. Acute otitis media Children with acute otitis media who are younger than six months of age are generally treated with amoxicillin or other antibiotics. Although most children with acute otitis media who are older than two years old do not benefit from treatment with amoxicillin or other antibiotics, such treatment may be helpful in children younger than two years old with acute otitis media that is bilateral or accompanied by ear drainage. In the past, amoxicillin was dosed three times daily when used to treat acute otitis media, which resulted in missed doses in routine ambulatory practice. There is now evidence that two-times daily dosing or once-daily dosing has similar effectiveness. Respiratory infections Most sinusitis infections are caused by viruses, for which amoxicillin and amoxicillin-clavulanate are ineffective, and the small benefit gained by amoxicillin may be overridden by the adverse effects. Amoxicillin is considered the first-line empirical treatment for most cases of uncomplicated bacterial sinusitis in children and adults when culture data is unavailable. Amoxicillin is recommended as the preferred first-line treatment for community-acquired pneumonia in adults by the National Institute for Health and Care Excellence, either alone (mild to moderate severity disease) or in combination with a macrolide. Research suggests that is as effective as co-amoxiclav (a broad-spectrum antibiotic) for people admitted to hospital with pneumonia, regardless of its severity. The World Health Organization (WHO) recommends amoxicillin as first-line treatment for pneumonia that is not "severe". Amoxicillin is used in post-exposure inhalation of anthrax to prevent disease progression and for prophylaxis. H. pylori It is effective as one part of a multi-drug regimen for the treatment of stomach infections of Helicobacter pylori. 
It is typically combined with a proton-pump inhibitor (such as omeprazole) and a macrolide antibiotic (such as clarithromycin); other drug combinations are also effective. Lyme borreliosis Amoxicillin is effective for the treatment of early cutaneous Lyme borreliosis; the effectiveness and safety of oral amoxicillin are neither better nor worse than those of common alternatively-used antibiotics. Odontogenic infections Amoxicillin is used to treat odontogenic infections, infections of the tongue, lips, and other oral tissues. It may be prescribed following a tooth extraction, particularly in those with compromised immune systems. Skin infections Amoxicillin is occasionally used for the treatment of skin infections, such as acne vulgaris. It is often an effective treatment for cases of acne vulgaris that have responded poorly to other antibiotics, such as doxycycline and minocycline. Infections in infants in resource-limited settings Amoxicillin is recommended by the World Health Organization for the treatment of infants with signs and symptoms of pneumonia in resource-limited situations when the parents are unable or unwilling to accept hospitalization of the child. Amoxicillin in combination with gentamicin is recommended for the treatment of infants with signs of other severe infections when hospitalization is not an option. Prevention of bacterial endocarditis It is also used to prevent bacterial endocarditis in high-risk people having dental work done, to prevent Streptococcus pneumoniae and other encapsulated bacterial infections in those without spleens, such as people with sickle-cell disease, and for both the prevention and the treatment of anthrax. The United Kingdom recommends against its use for infectious endocarditis prophylaxis. These recommendations do not appear to have changed the rates of infection for infectious endocarditis. Combination treatment Amoxicillin is susceptible to degradation by β-lactamase-producing bacteria, which are resistant to most β-lactam antibiotics, such as penicillin. For this reason, it may be combined with clavulanic acid, a β-lactamase inhibitor. This drug combination is commonly called co-amoxiclav. Spectrum of activity It is a moderate-spectrum, bacteriolytic, β-lactam antibiotic in the aminopenicillin family used to treat susceptible Gram-positive and Gram-negative bacteria. It is usually the drug of choice within the class because it is better absorbed, following oral administration, than other β-lactam antibiotics. In general, Streptococcus, Bacillus subtilis, Enterococcus, Haemophilus, Helicobacter, and Moraxella are susceptible to amoxicillin, whereas Citrobacter, Klebsiella and Pseudomonas aeruginosa are resistant to it. Some E. coli and most clinical strains of Staphylococcus aureus have developed resistance to amoxicillin to varying degrees. Adverse effects Adverse effects are similar to those for other β-lactam antibiotics, including nausea, vomiting, rashes, and antibiotic-associated colitis. Diarrhea (loose bowel movements) may also occur. Rarer adverse effects include mental and behavioral changes, lightheadedness, insomnia, hyperactivity, agitation, confusion, anxiety, sensitivity to lights and sounds, and unclear thinking. Immediate medical care is required upon the first signs of these adverse effects. Similarly to other penicillins, amoxicillin has been associated with an increased risk of seizures. Amoxicillin-induced neurotoxicity has been especially associated with concentrations of greater than 110 mg/L. 
The onset of an allergic reaction to amoxicillin can be very sudden and intense; emergency medical attention must be sought as quickly as possible. The initial phase of such a reaction often starts with a change in mental state, skin rash with intense itching (often beginning in the fingertips and around the groin area and rapidly spreading), and sensations of fever, nausea, and vomiting. Any other symptoms that seem even remotely suspicious must be taken very seriously. However, more mild allergy symptoms, such as a rash, can occur at any time during treatment, even up to a week after treatment has ceased. For some people allergic to amoxicillin, the adverse effects can be fatal due to anaphylaxis. Use of the amoxicillin/clavulanic acid combination for more than one week has caused a drug-induced immunoallergic-type hepatitis in some patients. Young children having ingested acute overdoses of amoxicillin manifested lethargy, vomiting, and renal dysfunction. There is poor reporting of adverse effects of amoxicillin from clinical trials. For this reason, the severity and frequency of adverse effects from amoxicillin are probably higher than reported in clinical trials. Nonallergic rash Between 3 and 10% of children taking amoxicillin (or ampicillin) show a late-developing (>72 hours after beginning medication and having never taken penicillin-like medication previously) rash, which is sometimes referred to as the "amoxicillin rash". The rash can also occur in adults and may rarely be a component of the DRESS syndrome. The rash is described as maculopapular or morbilliform (measles-like; therefore, in medical literature, it is called "amoxicillin-induced morbilliform rash".). It starts on the trunk and can spread from there. This rash is unlikely to be a true allergic reaction and is not a contraindication for future amoxicillin usage, nor should the current regimen necessarily be stopped. However, this common amoxicillin rash and a dangerous allergic reaction cannot easily be distinguished by inexperienced persons, so a healthcare professional is often required to distinguish between the two. A nonallergic amoxicillin rash may also be an indicator of infectious mononucleosis. Some studies indicate about 80–90% of patients with acute Epstein–Barr virus infection treated with amoxicillin or ampicillin develop such a rash. Interactions Amoxicillin may interact with these drugs: Anticoagulants (dabigatran, warfarin). Methotrexate (chemotherapy and immunosuppressant). Typhoid, Cholera and BCG vaccines. Probenecid reduces renal excretion and increases blood levels of amoxicillin. Oral contraceptives potentially become less effective. Allopurinol (gout treatment). Mycophenolate (immunosuppressant) When given intravenously or intramuscularly: It should not be mixed with blood products, or proteinaceous fluids (including protein hydrolysates) or with intravenous lipid emulsions aminoglycoside should be injected at a separate site from amoxicillin if the patient is prescribed both medications at the same time. Neither drug should be mixed in a syringe. Neither should they be mixed in an intravenous fluid container or giving set because of loss of activity of the aminoglycoside under these conditions. ciprofloxacin should not be mixed with amoxicillin. Infusions containing dextran or bicarbonate should not be mixed with amoxicillin solutions. 
Pharmacology Amoxicillin (α-amino-p-hydroxybenzyl penicillin) is a semisynthetic derivative of penicillin with a structure similar to ampicillin but with better absorption when taken by mouth, thus yielding higher concentrations in blood and in urine. Amoxicillin diffuses easily into tissues and body fluids. It will cross the placenta and is excreted into breastmilk in small quantities. It is metabolized by the liver and excreted into the urine. It has an onset of 30 minutes and a half-life of 3.7 hours in newborns and 1.4 hours in adults. Amoxicillin attaches to the cell wall of susceptible bacteria and results in their death. It is effective against streptococci, pneumococci, enterococci, Haemophilus influenzae, Escherichia coli, Proteus mirabilis, Neisseria meningitidis, Neisseria gonorrhoeae, Shigella, Chlamydia trachomatis, Salmonella, Borrelia burgdorferi, and Helicobacter pylori. As a derivative of ampicillin, amoxicillin is a member of the penicillin family and, like penicillins, is a β-lactam antibiotic. It inhibits cross-linkage between the linear peptidoglycan polymer chains that make up a major component of the bacterial cell wall. It has two ionizable groups in the physiological range (the amino group in alpha-position to the amide carbonyl group and the carboxyl group). Chemistry Amoxicillin is a β-lactam and aminopenicillin antibiotic in terms of chemical structure. It is structurally related to ampicillin. The experimental log P of amoxicillin is 0.87. It is described as an "ambiphilic"—between hydrophilic and lipophilic—antibiotic. History Amoxicillin was one of several semisynthetic derivatives of 6-aminopenicillanic acid (6-APA) developed by the Beecham Group in the 1960s. It was invented by Anthony Alfred Walter Long and John Herbert Charles Nayler, two British scientists. It became available in 1972 and was the second aminopenicillin to reach the market (after ampicillin in 1961). Co-amoxiclav became available in 1981. Society and culture Economics Amoxicillin is relatively inexpensive. In 2022, a survey of eight generic antibiotics commonly prescribed in the United States found their average cost to be about $42.67, while amoxicillin was sold for $12.14 on average. Modes of delivery Pharmaceutical manufacturers make amoxicillin in trihydrate form, for oral use available as capsules, regular, chewable and dispersible tablets, syrup and pediatric suspension for oral use, and as the sodium salt for intravenous administration. An extended-release is available. The intravenous form of amoxicillin is not sold in the United States. When an intravenous aminopenicillin is required in the United States, ampicillin is typically used. When there is an adequate response to ampicillin, the course of antibiotic therapy may often be completed with oral amoxicillin. Research with mice indicated successful delivery using intraperitoneally injected amoxicillin-bearing microparticles. Names Amoxicillin is the international nonproprietary name (INN), British Approved Name (BAN), and United States Adopted Name (USAN), while amoxycillin is the Australian Approved Name (AAN). Amoxicillin is one of the semisynthetic penicillins discovered by the former pharmaceutical company Beecham Group. The patent for amoxicillin has expired, thus amoxicillin and co-amoxiclav preparations are marketed under various brand names across the world. Veterinary uses Amoxicillin is also sometimes used as an antibiotic for animals. 
The use of amoxicillin for animals intended for human consumption (chickens, cattle, and swine for example) has been approved. References Further reading Carboxylic acids Enantiopure drugs Drugs developed by GSK plc Lyme disease Penicillins Phenethylamines 4-Hydroxyphenyl compounds Wikipedia medicine articles ready to translate World Health Organization essential medicines
Amoxicillin
[ "Chemistry" ]
3,234
[ "Carboxylic acids", "Stereochemistry", "Functional groups", "Enantiopure drugs" ]
2,889
https://en.wikipedia.org/wiki/Amorphous%20solid
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers. Etymology The term comes from the Greek a ("without") and morphé ("shape, form"). Structure Amorphous materials have an internal structure of molecular-scale structural blocks that can be similar to the basic structural units in the crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range regularity exists: amorphous materials cannot be described by the repetition of a finite unit cell. Statistical measures, such as the atomic density function and the radial distribution function, are more useful in describing the structure of amorphous solids. Although amorphous materials lack long-range order, they exhibit localized order on small length scales. By convention, short-range order extends only to the nearest-neighbor shell, typically only 1-2 atomic spacings. Medium-range order may extend beyond the short-range order by 1-2 nm. Fundamental properties of amorphous solids Glass transition at high temperatures The freezing of a liquid into an amorphous solid, known as the glass transition, is considered one of the major unsolved problems of physics. Universal low-temperature properties of amorphous solids At very low temperatures (below 1-10 K), a large family of amorphous solids share similar low-temperature properties. Although there are various theoretical models, neither the glass transition nor the low-temperature properties of glassy solids are well understood at the level of fundamental physics. The study of amorphous solids is an important area of condensed matter physics that aims to understand these substances both at the high temperatures of the glass transition and at low temperatures approaching absolute zero. From the 1970s, the low-temperature properties of amorphous solids were studied experimentally in great detail. For all of these substances, the specific heat has a (nearly) linear dependence on temperature, and the thermal conductivity has a nearly quadratic temperature dependence. These properties are conventionally called anomalous, being very different from the properties of crystalline solids. On the phenomenological level, many of these properties were described by a collection of tunnelling two-level systems. Nevertheless, the microscopic theory of these properties is still missing after more than 50 years of research. Remarkably, the dimensionless internal friction of these materials is nearly universal. This quantity is a dimensionless ratio (up to a numerical constant) of the phonon wavelength to the phonon mean free path. Since the theory of tunnelling two-level states (TLSs) does not address the origin of the density of TLSs, this theory cannot explain the universality of internal friction, which in turn is proportional to the density of scattering TLSs. The theoretical significance of this important and unsolved problem was highlighted by Anthony Leggett. Nano-structured materials Amorphous materials have some degree of short-range order at the atomic-length scale due to the nature of intermolecular chemical bonding. 
Furthermore, in very small crystals, short-range order encompasses a large fraction of the atoms; nevertheless, relaxation at the surface, along with interfacial effects, distorts the atomic positions and decreases structural order. Even the most advanced structural characterization techniques, such as X-ray diffraction and transmission electron microscopy, can have difficulty distinguishing amorphous and crystalline structures at short-size scales. Characterization of amorphous solids Due to the lack of long-range order, standard crystallographic techniques are often inadequate in determining the structure of amorphous solids. A variety of electron, X-ray, and computation-based techniques have been used to characterize amorphous materials. Multi-modal analysis is very common for amorphous materials. X-ray and neutron diffraction Unlike crystalline materials, which exhibit strong Bragg diffraction, the diffraction patterns of amorphous materials are characterized by broad and diffuse peaks. As a result, detailed analysis and complementary techniques are required to extract real space structural information from the diffraction patterns of amorphous materials. It is useful to obtain diffraction data from both X-ray and neutron sources as they have different scattering properties and provide complementary data. Pair distribution function analysis can be performed on diffraction data to determine the probability of finding a pair of atoms separated by a certain distance. Another type of analysis that is done with diffraction data of amorphous materials is radial distribution function analysis, which measures the number of atoms found at varying radial distances away from an arbitrary reference atom. From these techniques, the local order of an amorphous material can be elucidated. X-ray absorption fine-structure spectroscopy X-ray absorption fine-structure spectroscopy is an atomic scale probe making it useful for studying materials lacking in long-range order. Spectra obtained using this method provide information on the oxidation state, coordination number, and species surrounding the atom in question as well as the distances at which they are found. Atomic electron tomography The atomic electron tomography technique is performed in transmission electron microscopes capable of reaching sub-Angstrom resolution. A collection of 2D images taken at numerous different tilt angles is acquired from the sample in question and then used to reconstruct a 3D image. After image acquisition, a significant amount of processing must be done to correct for issues such as drift, noise, and scan distortion. High-quality analysis and processing using atomic electron tomography results in a 3D reconstruction of an amorphous material detailing the atomic positions of the different species that are present. Fluctuation electron microscopy Fluctuation electron microscopy is another transmission electron microscopy-based technique that is sensitive to the medium-range order of amorphous materials. Structural fluctuations arising from different forms of medium-range order can be detected with this method. Fluctuation electron microscopy experiments can be done in conventional or scanning transmission electron microscope mode. Computational techniques Simulation and modeling techniques are often combined with experimental methods to characterize structures of amorphous materials. Commonly used computational techniques include density functional theory, molecular dynamics, and reverse Monte Carlo. 
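The pair distribution and radial distribution analyses described above lend themselves to a compact computational illustration. The following Python sketch is purely illustrative and is not taken from any cited software package; the function name, its parameters, and the random test configuration are assumptions made for the example. It estimates a radial distribution function g(r) from atomic coordinates in a cubic box with periodic boundary conditions; for a realistic amorphous model (for instance, one produced by a molecular dynamics simulation), g(r) shows a few broad peaks that decay toward 1, in contrast to the sharp peaks of a crystal.

```python
import numpy as np

def radial_distribution(positions, box_length, r_max, n_bins=100):
    """Histogram estimate of g(r) for atoms in a cubic periodic box,
    using the minimum-image convention for pair separations."""
    n_atoms = len(positions)
    counts = np.zeros(n_bins)

    for i in range(n_atoms - 1):
        # Displacements from atom i to all later atoms, wrapped into the box.
        diff = positions[i + 1:] - positions[i]
        diff -= box_length * np.round(diff / box_length)
        dist = np.linalg.norm(diff, axis=1)
        hist, _ = np.histogram(dist[(dist > 0) & (dist < r_max)],
                               bins=n_bins, range=(0.0, r_max))
        counts += hist

    # Normalise by the ideal-gas expectation for each spherical shell,
    # so that g(r) tends to 1 at large r for a structureless system.
    density = n_atoms / box_length**3
    r_edges = np.linspace(0.0, r_max, n_bins + 1)
    shell_volumes = 4.0 / 3.0 * np.pi * (r_edges[1:]**3 - r_edges[:-1]**3)
    ideal_counts = 0.5 * n_atoms * density * shell_volumes  # each pair counted once
    g_r = counts / ideal_counts
    r_centres = 0.5 * (r_edges[:-1] + r_edges[1:])
    return r_centres, g_r

# Toy usage with a random (structureless) configuration; real input would be
# atomic positions from a simulation or a reconstruction of diffraction data.
rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = radial_distribution(atoms, box_length=10.0, r_max=5.0)
```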
Uses and observations Amorphous thin films Amorphous phases are important constituents of thin films. Thin films are solid layers of a few nanometres to tens of micrometres thickness that are deposited onto a substrate. So-called structure zone models were developed to describe the microstructure of thin films as a function of the homologous temperature (Th), which is the ratio of deposition temperature to melting temperature. According to these models, a necessary condition for the occurrence of amorphous phases is that (Th) has to be smaller than 0.3. The deposition temperature must be below 30% of the melting temperature. Superconductivity Regarding their applications, amorphous metallic layers played an important role in the discovery of superconductivity in amorphous metals made by Buckel and Hilsch. The superconductivity of amorphous metals, including amorphous metallic thin films, is now understood to be due to phonon-mediated Cooper pairing. The role of structural disorder can be rationalized based on the strong-coupling Eliashberg theory of superconductivity. Thermal protection Amorphous solids typically exhibit higher localization of heat carriers compared to crystalline, giving rise to low thermal conductivity. Products for thermal protection, such as thermal barrier coatings and insulation, rely on materials with ultralow thermal conductivity. Technological uses Today, optical coatings made from TiO2, SiO2, Ta2O5 etc. (and combinations of these) in most cases consist of amorphous phases of these compounds. Much research is carried out into thin amorphous films as a gas-separating membrane layer. The technologically most important thin amorphous film is probably represented by a few nm thin SiO2 layers serving as isolator above the conducting channel of a metal-oxide semiconductor field-effect transistor (MOSFET). Also, hydrogenated amorphous silicon (Si:H) is of technical significance for thin-film solar cells. Pharmaceutical use In the pharmaceutical industry, some amorphous drugs have been shown to offer higher bioavailability than their crystalline counterparts as a result of the higher solubility of the amorphous phase. However, certain compounds can undergo precipitation in their amorphous form in vivo and can then decrease mutual bioavailability if administered together. Studies of GDC-0810 ASDs show a strong interrelationship between microstructure, physical properties and dissolution performance. In soils Amorphous materials in soil strongly influence bulk density, aggregate stability, plasticity, and water holding capacity of soils. The low bulk density and high void ratios are mostly due to glass shards and other porous minerals not becoming compacted. Andisol soils contain the highest amounts of amorphous materials. Phase Amorphous phases were a phenomenon of particular interest for the study of thin-film growth. The growth of polycrystalline films is often used and preceded by an initial amorphous layer, the thickness of which may amount to only a few nm. The most investigated example is represented by the unoriented molecules of thin polycrystalline silicon films. Wedge-shaped polycrystals were identified by transmission electron microscopy to grow out of the amorphous phase only after the latter has exceeded a certain thickness, the precise value of which depends on deposition temperature, background pressure, and various other process parameters. 
The phenomenon has been interpreted in the framework of Ostwald's rule of stages that predicts the formation of phases to proceed with increasing condensation time towards increasing stability. Notes References Further reading Phases of matter Unsolved problems in physics
Amorphous solid
[ "Physics", "Chemistry" ]
2,061
[ "Amorphous solids", "Unsolved problems in physics", "Phases of matter", "Matter" ]
2,909
https://en.wikipedia.org/wiki/Albinism%20in%20humans
Albinism is a congenital condition characterized in humans by the partial or complete absence of pigment in the skin, hair and eyes. Albinism is associated with a number of vision defects, such as photophobia, nystagmus, and amblyopia. Lack of skin pigmentation makes for more susceptibility to sunburn and skin cancers. In rare cases such as Chédiak–Higashi syndrome, albinism may be associated with deficiencies in the transportation of melanin granules. This also affects essential granules present in immune cells, leading to increased susceptibility to infection. Albinism results from inheritance of recessive gene alleles and is known to affect all vertebrates, including humans. It is due to absence or defect of tyrosinase, a copper-containing enzyme involved in the production of melanin. Unlike humans, other animals have multiple pigments and for these, albinism is considered to be a hereditary condition characterised by the absence of melanin in particular, in the eyes, skin, hair, scales, feathers or cuticle. While an organism with complete absence of melanin is called an albino, an organism with only a diminished amount of melanin is described as leucistic or albinoid. The term is from the Latin albus, "white". Signs and symptoms There are two principal types of albinism: oculocutaneous, affecting the eyes, skin and hair, and ocular affecting the eyes only. There are different types of oculocutaneous albinism depending on which gene has undergone mutation. With some there is no pigment at all. The other end of the spectrum of albinism is "a form of albinism called rufous oculocutaneous albinism, which usually affects dark-skinned people". According to the National Organization for Albinism and Hypopigmentation, "With ocular albinism, the color of the iris of the eye may vary from blue to green or even brown, and sometimes darkens with age. However, when an optometrist or ophthalmologist examines the eye by shining a light from the side of the eye, the light shines back through the iris since very little pigment is present." Because individuals with albinism have skin that entirely lacks the dark pigment melanin, which helps protect the skin from the sun's ultraviolet radiation, their skin can burn more easily from overexposure. The human eye normally produces enough pigment to color the iris blue, green or brown and lend opacity to the eye. In photographs, those with albinism are more likely to demonstrate "red eye", due to the red of the retina being visible through the iris. Lack of pigment in the eyes also results in problems with vision, both related and unrelated to photosensitivity. Those with albinism are generally as healthy as the rest of the population (but see related disorders below), with growth and development occurring as normal, and albinism by itself does not cause mortality, although the lack of pigment blocking ultraviolet radiation increases the risk of melanomas (skin cancers) and other problems. Visual problems Development of the optical system is highly dependent on the presence of melanin. For this reason, the reduction or absence of this pigment in people with albinism may lead to: Misrouting of the retinogeniculate projections, resulting in abnormal decussation (crossing) of optic nerve fibres Photophobia and decreased visual acuity due to light scattering within the eye (ocular straylight) Reduced visual acuity due to foveal hypoplasia and possibly light-induced retinal damage. 
Eye conditions common in albinism include: Nystagmus, irregular rapid movement of the eyes back and forth, or in circular motion. Amblyopia, decrease in acuity of one or both eyes due to poor transmission to the brain, often due to other conditions such as strabismus. Optic nerve hypoplasia, underdevelopment of the optic nerve. The improper development of the retinal pigment epithelium (RPE), which in normal eyes absorbs most of the reflected sunlight, further increases glare due to light scattering within the eye. The resulting sensitivity (photophobia) generally leads to discomfort in bright light, but this can be reduced by the use of sunglasses or brimmed hats. Genetics Oculocutaneous albinism is generally the result of the biological inheritance of genetically recessive alleles (genes) passed from both parents of an individual such as OCA1 and OCA2. A mutation in the human TRP-1 gene may result in the deregulation of melanocyte tyrosinase enzymes, a change that is hypothesized to promote brown versus black melanin synthesis, resulting in a third oculocutaneous albinism (OCA) genotype, "OCA3". Some rare forms are inherited from only one parent. There are other genetic mutations which are proven to be associated with albinism. All alterations, however, lead to changes in melanin production in the body. The chance of offspring with albinism resulting from the pairing of an organism with albinism and one without albinism is low. However, because organisms (including humans) can be carriers of genes for albinism without exhibiting any traits, albinistic offspring can be produced by two non-albinistic parents. Albinism usually occurs with equal frequency in both sexes. An exception to this is ocular albinism, which it is passed on to offspring through X-linked inheritance. Thus, ocular albinism occurs more frequently in males as they have a single X and Y chromosome, unlike females, whose genetics are characterized by two X chromosomes. There are two different forms of albinism: a partial lack of the melanin is known as hypomelanism, or hypomelanosis, and the total absence of melanin is known as amelanism or amelanosis. Enzyme The enzyme defect responsible for OCA1-type albinism is tyrosine 3-monooxygenase (tyrosinase), which synthesizes melanin from the amino acid tyrosine. Evolutionary theories It is suggested that the early genus Homo (humans in the broader sense) started to evolve in East Africa around 3 million years ago. The dramatic phenotypic change from the ape-like Australopithecus to early Homo is hypothesized to have involved the extreme loss of body hair – except for areas most exposed to UV radiation, such as the head – to allow for more efficient thermoregulation in the early hunter-gatherers. The skin that would have been exposed upon general body hair loss in these early proto-humans would have most likely been non-pigmented, reflecting the pale skin underlying the hair of our chimpanzee relatives. A positive advantage would have been conferred to early hominids inhabiting the African continent that were capable of producing darker skin – those who first expressed the eumelanin-producing MC1R allele – which protected them from harmful epithelium-damaging ultraviolet rays. Over time, the advantage conferred to those with darker skin may have led to the prevalence of darker skin on the continent. The positive advantage, however, would have had to be strong enough so as to produce a significantly higher reproductive fitness in those who produced more melanin. 
The cause of a selective pressure strong enough to cause this shift is an area of much debate. Some hypotheses include the existence of significantly lower reproductive fitness in people with less melanin due to lethal skin cancer, lethal kidney disease due to excess vitamin D formation in the skin of people with less melanin, or simply natural selection due to mate preference and sexual selection. When comparing the prevalence of albinism in Africa to its prevalence in other parts of the world, such as Europe and the United States, the potential evolutionary effects of skin cancer as a selective force due to its effect on these populations may not be insignificant. It would follow, then, that there would be stronger selective forces acting on albino individuals in Africa than on albinos in Europe and the US. In two separate studies in Nigeria, very few people with albinism appear to survive to old age. One study found that 89% of people diagnosed with albinism are between 0 and 30 years of age, while the other found that 77% of albinos were under the age of 20. However, it has also been theorized that albinism may have been able to spread in some Native American communities, because albino males were culturally revered and assumed as having divine origins. The very high incidence of albinism among the Hopi tribe has been frequently attributed to the privileged status of albino males in Hopi society, who were not required to perform physical work outdoors, shielding them from the harmful effects of UV radiation. This privileged status of albino males in Hopi society allowed them to reproduce with large numbers of non-albino women, spreading the genes that are associated with albinism. Diagnosis Genetic testing can confirm albinism and what variety it is, but offers no medical benefits, except in the case of non-OCA disorders. Such disorders cause other medical problems in conjunction with albinism, and may be treatable. Genetic tests are currently available for parents who want to find out if they are carriers of ty-neg albinism. Diagnosis of albinism involves carefully examining a person's eyes, skin and hairs. Genealogical analysis can also help. Albinism can also be a feature of several syndromes: ABCD syndrome Albinism-hearing loss syndrome Deafness, congenital, with total albinism Ermine phenotype Hermansky-Pudlak syndrome 1 to 11 (excluding 9) Microcephaly-albinism-digital anomalies syndrome Ocular albinism with late-onset sensorineural deafness Ocular albinism, type II Oculocutaneous albinism types 1B, 3 to 7 Tyrosinase-negative oculocutaneous albinism Tyrosinase-positive oculocutaneous albinism Vici syndrome Waardenburg syndrome type 2A Management Since there is no cure for albinism, it is managed through lifestyle adjustments. People with albinism need to take care not to get sunburnt and should have regular healthy skin checks by a dermatologist. For the most part, treatment of the eye conditions consists of visual rehabilitation. Surgery is possible on the extra-ocular muscles to decrease strabismus. Nystagmus-damping surgery can also be performed, to reduce the "shaking" of the eyes back and forth. The effectiveness of all these procedures varies greatly and depends on individual circumstances. Glasses (often with tinted lenses), low vision aids, large-print materials, and bright angled reading lights can help individuals with albinism. 
Some people with albinism do well using bifocals (with a strong reading lens), prescription reading glasses, hand-held devices such as magnifiers or monoculars or wearable devices like eSight and Brainport. The condition may lead to abnormal development of the optic nerve and sunlight may damage the retina of the eye as the iris cannot filter out excess light due to a lack of pigmentation. Photophobia may be ameliorated by the use of sunglasses which filter out ultraviolet light. Some use bioptics, glasses which have small telescopes mounted on, in, or behind their regular lenses, so that they can look through either the regular lens or the telescope. Newer designs of bioptics use smaller light-weight lenses. Some US states allow the use of bioptic telescopes for driving motor vehicles. (See also NOAH bulletin "Low Vision Aids".) There are a number of national support groups across the globe which come under the umbrella of the World Albinism Alliance. Epidemiology Albinism affects people of all ethnic backgrounds; its frequency worldwide is estimated to be approximately one in 17,000. Prevalence of the different forms of albinism varies considerably by population, and is highest overall in people of sub-Saharan African descent. Today, the prevalence of albinism in sub-Saharan Africa is around 1 in 5,000, while in Europe and the US it is around 1 in 20,000 of the European derived population. Rates as high as 1 in 1,000 have been reported for some populations in Zimbabwe and other parts of Southern Africa. Certain ethnic groups and populations in isolated areas exhibit heightened susceptibility to albinism, presumably due to genetic factors. These include notably the Native American Kuna, Zuni and Hopi nations (respectively of Panama, New Mexico and Arizona); Japan, in which one particular form of albinism is unusually common (OCA 4); and Ukerewe Island, the population of which shows a very high incidence of albinism. Society and culture Special status of albinos in Native American culture In some Native American and South Pacific cultures, people with albinism have been traditionally revered, because they were considered heavenly beings associated with the sky. Among various indigenous tribes in South America, albinos were able to live luxurious lives due to their divine status. This special status was applied mainly to male albinos. It has been theorized that the very high level of albinism among some Native American tribes can be attributed to sexual privileges given to male albinos, which allowed them to reproduce with large numbers of non-albino women in their tribes, leading to the spread of genes that are associated with albinism. Persecution of people with albinism Humans with albinism often face social and cultural challenges (even threats), as the condition is often a source of ridicule, discrimination, or even fear and violence. It is especially socially stigmatised in many African societies. A study conducted in Nigeria on albino children stated that "they experienced alienation, avoided social interactions and were less emotionally stable. Furthermore, affected individuals were less likely to complete schooling, find employment, and find partners". Many cultures around the world have developed beliefs regarding people with albinism. In African countries such as Tanzania and Burundi, there has been an unprecedented rise in witchcraft-related killings of people with albinism in recent years, because their body parts are used in potions sold by witch doctors. 
Numerous authenticated incidents have occurred in Africa during the 21st century. For example, in Tanzania, in September 2009, three men were convicted of killing a 14-year-old albino boy and severing his legs in order to sell them for witchcraft purposes. Again in Tanzania and Burundi in 2010, the murder and dismemberment of a kidnapped albino child was reported from the courts, as part of a continuing problem. The US-based National Geographic Society estimated that in Tanzania a complete set of albino body parts is worth US$75,000. Another harmful and false belief is that sex with an albinistic woman will cure a man of HIV. This has led, for example in Zimbabwe, to rapes (and subsequent HIV infection). Albinism in popular culture Famous people with albinism include historical figures such as Oxford don William Archibald Spooner; actor-comedian Victor Varnado; musicians such as Johnny and Edgar Winter, Salif Keita, Winston "Yellowman" Foster, Brother Ali, Sivuca, Hermeto Pascoal, Willie "Piano Red" Perryman, Kalash Criminel; actor-rapper Krondon, and fashion models Connie Chiu, Ryan "La Burnt" Byrne and Shaun Ross. Emperor Seinei of Japan is thought to have albinism because he was said to have been born with white hair. International Albinism Awareness Day International Albinism Awareness Day was established after a motion was accepted on 18 December 2014 by the United Nations General Assembly, proclaiming that 13 June would be known as International Albinism Awareness Day as of 2015. This was followed by a mandate created by the United Nations Human Rights Council that appointed Ms. Ikponwosa Ero, who is from Nigeria, as the first Independent Expert on the enjoyment of human rights by persons with albinism. See also References External links GeneReview/NCBI/NIH/UW entry on Oculocutaneous Albinism Type 2 GeneReview/NCBI/NIH/UW entry on Oculocutaneous Albinism Type 4 Autosomal recessive disorders Dermatologic terminology Disturbances of human pigmentation Human skin color
Albinism in humans
[ "Biology" ]
3,472
[ "Human skin color", "Pigmentation" ]
2,923
https://en.wikipedia.org/wiki/AIM%20%28software%29
AIM (AOL Instant Messenger, sometimes stylized as aim) was an instant messaging and presence computer program created by AOL, which used the proprietary OSCAR instant messaging protocol and the TOC protocol to allow registered users to communicate in real time. AIM was popular by the late 1990s in the United States and other countries, and was the leading instant messaging application in that region into the following decade. Teens and college students were known to use the messenger's away message feature to keep in touch with friends, frequently changing their away message throughout the day, or leaving a message up with the computer left on, to inform buddies of their goings-on, location, parties, thoughts, or jokes. AIM's popularity declined as the number of AOL subscribers fell, steeply so toward the 2010s, as Gmail's Google Talk, SMS, and Internet social networks such as Facebook gained popularity. Its fall has often been compared with that of other once-popular Internet services, such as Myspace. In June 2015, AOL was acquired by Verizon Communications. In June 2017, Verizon combined AOL and Yahoo into its subsidiary Oath Inc. (now called Yahoo). The company discontinued AIM as a service on December 15, 2017. History In May 1997, AIM was released unceremoniously as a stand-alone download for Microsoft Windows. AIM was an outgrowth of "online messages" in the original platform written in PL/1 on a Stratus computer by Dave Brown. At one time, the software had the largest share of the instant messaging market in North America, especially in the United States (with 52% of the total reported). This does not include other instant messaging software related to or developed by AOL, such as ICQ and iChat. During its heyday, its main competitors were ICQ (which AOL acquired in 1998), Yahoo! Messenger and MSN Messenger. AOL particularly had a rivalry or "chat war" with PowWow and Microsoft, starting in 1999. There were several attempts by Microsoft to log into both its own and AIM's protocol servers simultaneously. AOL was unhappy about this and started blocking MSN Messenger from being able to access AIM. This led to efforts by many companies to challenge the AOL and Time Warner merger on the grounds of antitrust behaviour, leading to the formation of the OpenNet Coalition. Official mobile versions of AIM appeared as early as 2001 on Palm OS through the AOL application. Third-party applications allowed it to be used on the Sidekick in 2002. A version for Symbian OS was announced in 2003, as were others for BlackBerry and Windows Mobile. After 2012, stand-alone official AIM client software included advertisements and was available for Microsoft Windows, Windows Mobile, Classic Mac OS, macOS, Android, iOS, and BlackBerry OS. Usage decline and product sunset Around 2011, AIM started to lose popularity rapidly, partly due to the quick rise of Gmail and its built-in real-time Google Chat instant messenger integration, and because many people migrated to SMS or iMessage text messaging and, later, to social networking websites and apps for instant messaging, in particular Facebook Messenger, which was released as a standalone application the same year. AOL made a partnership to integrate AIM messaging in Google Talk, and had a feature for AIM users to send SMS messages directly from AIM to any number, as well as for SMS users to send an IM to any AIM user. As of June 2011, one source reported AOL Instant Messenger market share had collapsed to 0.73%. 
However, this number only reflected installed IM applications, and not active users. The engineers responsible for AIM claimed that they were unable to convince AOL management that free was the future. On March 3, 2012, AOL ended employment of AIM's development staff while leaving it active and with help support still provided. On October 6, 2017, it was announced that the AIM service would be discontinued on December 15; however, a non-profit development team known as Wildman Productions started up a server for older versions of AOL Instant Messenger, known as AIM Phoenix. The "Running Man" The AIM mascot was designed by JoRoan Lazaro and was implemented in the first release in 1997. This was a yellow stickman-like figure, often called the "Running Man". The mascot appeared on all AIM logos and most wordmarks, and always appeared at the top of the buddy list. AIM's popularity in the late 1990s and the 2000s led to the “Running Man” becoming a familiar brand on the Internet. After over 14 years, the iconic logo disappeared as part of the AIM rebranding in 2011. However, in August 2013, the "Running Man" returned. It was used for other AOL services like AOL Top Speed and is still featured in a theme on AOL Mail. In 2014, a Complex editor called it a "symbol of America". In April 2015, the Running Man was officially featured in the Virgin London Marathon, dressed by a person for the AOL-partnered Free The Children charity. Protocol The standard protocol that AIM clients used to communicate is called Open System for CommunicAtion in Realtime (OSCAR). Most AOL-produced versions of AIM and popular third party AIM clients use this protocol. However, AOL also created a simpler protocol called TOC that lacks many of OSCAR's features, but was sometimes used for clients that only require basic chat functionality. The TOC/TOC2 protocol specifications were made available by AOL, while OSCAR is a closed protocol that third parties had to reverse-engineer. In January 2008, AOL introduced experimental Extensible Messaging and Presence Protocol (XMPP) support for AIM, allowing AIM users to communicate using the standardized, open-source XMPP. However, in March 2008, this service was discontinued. In May 2011, AOL started offering limited XMPP support. On March 1, 2017, AOL announced (via XMPP-login-time messages) that the AOL XMPP gateway would be desupported, effective March 28, 2017. Privacy For privacy regulations, AIM had strict age restrictions. AIM accounts are available only for people over the age of 13; children younger than that were not permitted access to AIM. Under the AIM Privacy Policy, AOL had no rights to read or monitor any private communications between users. The profile of the user had no privacy. In November 2002, AOL targeted the corporate industry with Enterprise AIM Services (EAS), a higher security version of AIM. If public content was accessed, it could be used for online, print or broadcast advertising, etc. This was outlined in the policy and terms of service: "... you grant AOL, its parent, affiliates, subsidiaries, assigns, agents and licensees the irrevocable, perpetual, worldwide right to reproduce, display, perform, distribute, adapt and promote this Content in any medium". This allowed anything users posted to be used without a separate request for permission. AIM's security was called into question. 
AOL stated that it had taken great pains to ensure that personal information will not be accessed by unauthorized members, but that it cannot guarantee that it will not happen. AIM was different from other clients, such as Yahoo! Messenger, in that it did not require approval from users to be added to other users' buddy lists. As a result, it was possible for users to keep other unsuspecting users on their buddy list to see when they were online, read their status and away messages, and read their profiles. There was also a Web API to display one's status and away message as a widget on one's webpage. Though one could block a user from communicating with them and seeing their status, this did not prevent that user from creating a new account that would not automatically be blocked and therefore able to track their status. A more conservative privacy option was to select a menu feature that only allowed communication with users on one's buddy list; however, this option also created the side-effect of blocking all users who were not on one's buddy list. Users could also choose to be invisible to all. Chat robots AOL and various other companies supplied robots (bots) on AIM that could receive messages and send a response based on the bot's purpose. For example, bots could help with studying, like StudyBuddy. Some were made to relate to children and teenagers, like Spleak. Others gave advice. The more useful chat bots had features like the ability to play games, get sport scores, weather forecasts or financial stock information. Users were able to talk to automated chat bots that could respond to natural human language. They were primarily put into place as a marketing strategy and for unique advertising options. It was used by advertisers to market products or build better consumer relations. Before the inclusions of such bots, the other bots DoorManBot and AIMOffline provided features that were provided by AOL for those who needed it. ZolaOnAOL and ZoeOnAOL were short-lived bots that ultimately retired their features in favor of SmarterChild. URI scheme AOL Instant Messenger's installation process automatically installed an extra URI scheme ("protocol") handler into some Web browsers, so URIs beginning with aim: could open a new AIM window with specified parameters. This was similar in function to the mailto: URI scheme, which created a new e-mail message using the system's default mail program. For instance, a webpage might have included a link like the following in its HTML source to open a window for sending a message to the AIM user notarealuser: <a href="aim:goim?screenname=notarealuser">Send Message</a> To specify a message body, the message parameter was used, so the link location would have looked like this: aim:goim?screenname=notarealuser&message=This+is+my+message To specify an away message, the message parameter was used, so the link location would have looked like this: aim:goaway?message=Hello,+my+name+is+Bill When placing this inside a URL link, an AIM user could click on the URL link and the away message "Hello, my name is Bill" would instantly become their away message. To add a buddy, the addbuddy message was used, with the "screenname" parameter aim:addbuddy?screenname=notarealuser This type of link was commonly found on forum profiles to easily add contacts. Vulnerabilities AIM had security weaknesses that have enabled exploits to be created that used third-party software to perform malicious acts on users' computers. 
Although most were relatively harmless, such as being kicked off the AIM service, others performed potentially dangerous actions, such as sending viruses. Some of these exploits relied on social engineering to spread by automatically sending instant messages that contained a Uniform Resource Locator (URL) accompanied by text suggesting the receiving user click on it, an action which leads to infection, i.e., a trojan horse. These messages could easily be mistaken as coming from a friend and contain a link to a Web address that installed software on the user's computer to restart the cycle. Extra features iPhone application On March 6, 2008, during Apple Inc.'s iPhone SDK event, AOL announced that they would be releasing an AIM application for iPhone and iPod Touch users. The application was available for free from the App Store, but the company also provided a paid version, which displayed no advertisements. Both were available from the App Store. The AIM client for iPhone and iPod Touch supported standard AIM accounts, as well as MobileMe accounts. There was also an express version of AIM accessible through the Safari browser on the iPhone and iPod Touch. In 2011, AOL launched an overhaul of their Instant Messaging service. Included in the update was a brand new iOS application for iPhone and iPod Touch that incorporated all the latest features. A brand new icon was used for the application, featuring the new cursive logo for AIM. The user-interface was entirely redone for the features including: a new buddy list, group messaging, in-line photos and videos, as well as improved file-sharing. Version 5.0.5, updated in March 2012, it supported more social stream features, much like Facebook and Twitter, as well as the ability to send voice messages up to 60 seconds long. iPad application On April 3, 2010, Apple released the first generation iPad. Along with this newly released device AOL released the AIM application for iPad. It was built entirely from scratch for the new version of iOS with a specialized user-interface for the device. It supported geolocation, Facebook status updates and chat, Myspace, Twitter, YouTube, Foursquare, and many other social networking platforms. AIM Express AIM Express ran in a pop-up browser window. It was intended for use by people who are unwilling or unable to install a standalone application or those at computers that lack the AIM application. AIM Express supported many of the standard features included in the stand-alone client, but did not provide advanced features like file transfer, audio chat, video conferencing, or buddy info. It was implemented in Adobe Flash. It was an upgrade to the prior AOL Quick Buddy, which was later available for older systems that cannot handle Express before being discontinued. Express and Quick Buddy were similar to MSN Web Messenger and Yahoo! Web Messenger. This web version evolved into AIM.com's web-based messenger. AIM Pages AIM Pages was a free website released in May 2006 by AOL in replacement of AIMSpace. Anyone who had an AIM user name and was at least 16 years of age could create their own web page (to display an online, dynamic profile) and share it with buddies from their AIM Buddy list. Layout AIM Pages included links to the email and Instant Message of the owner, along with a section listing the owners "buddies", which included AIM user names. It was possible to create modules in a Module T microformat. 
Video hosting sites like Netflix and YouTube could be added to ones AIM Page, as well as other sites like Amazon.com. It was also possible to insert HTML code. The main focus of AIM Pages was the integration of external modules, like those listed above, into the AOL Instant Messenger experience. Discontinuation By late 2007, AIM Pages were discontinued. After AIM Pages shutdown, links to AIM Pages were redirected to AOL Lifestream, AOL's new site aimed at collecting external modules in one place, independent of AIM buddies. AOL Lifestream was shut down February 24, 2017. AIM for Mac AOL released an all-new AIM for the Mac on September 29, 2008, and the final build on December 15, 2008. The redesigned AIM for Mac is a full universal binary Cocoa API application that supports both Tiger and Leopard — Mac OS X 10.4.8 (and above) or Mac OS X 10.5.3 (and above). On October 1, 2009, AOL released AIM 2.0 for Mac. AIM real-time IM This feature was available for AIM 7 and allowed for a user to see what the other is typing as it is being done. It was developed and built with assistance from Trace Research and Development Centre at University of Wisconsin–Madison and Gallaudet University. The application provides visually impaired users the ability to convert messages from text (words) to speech. For the application to work users must have AIM 6.8 or higher, as it is not compatible with older versions of AIM software, AIM for Mac or iChat. AIM to mobile (messaging to phone numbers) This feature allows text messaging to a phone number (text messaging is less functional than instant messaging). Discontinued features AIM Phoneline AIM Phoneline was a Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided via the AIM application. It was also known to work with Apple's iChat Client. The service was officially closed to its customers on January 13, 2009. The closing of the free service caused the number associated with the service to be disabled and not transferable for a different service. AIM Phoneline website was recommending users switch to a new service named AIM Call Out, also discontinued now. Launched on May 16, 2006, AIM Phoneline provided users the ability to have several local numbers, allowing AIM users to receive free incoming calls. The service allowed users to make calls to landlines and mobile devices through the use of a computer. The service, however, was only free for receiving and AOL charged users $14.95 a month for an unlimited calling plan. In order to use AIM Phoneline users had to install the latest free version of AIM Triton software and needed a good set of headphones with a boom microphone. It could take several days after a user signed up before it started working. AIM Call Out AIM Call Out is a discontinued Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided by AOL via its AIM application that replaced the defunct AIM Phoneline service in November 2007. It did not depend on the AIM client and could be used with only an AIM screenname via the WebConnect feature or a dedicated SIP device. The AIM Call Out service was shut down on March 25, 2009. Security On November 4, 2014, AIM scored one out of seven points on the Electronic Frontier Foundation's secure messaging scorecard. 
AIM received a point for encryption in transit, but lost points because communications were not encrypted with a key to which the provider has no access (i.e., they were not end-to-end encrypted), users could not verify contacts' identities, past messages were not secure if the encryption keys were stolen (i.e., the service did not provide forward secrecy), the code was not open to independent review (i.e., it was not open source), the security design was not properly documented, and there had not been a recent independent security audit. BlackBerry Messenger (BBM), Ebuddy XMS, Hushmail, Kik Messenger, Skype, Viber, and Yahoo! Messenger also scored one out of seven points. See also Comparison of cross-platform instant messaging clients List of defunct instant messaging platforms References External links 1997 software Android (operating system) software Instant Messenger BlackBerry software Classic Mac OS instant messaging clients Cross-platform software Defunct instant messaging clients Instant messaging clients Internet properties disestablished in 2017 IOS software MacOS instant messaging clients Online chat Symbian software Unix instant messaging clients Videotelephony Windows instant messaging clients
AIM (software)
[ "Technology" ]
3,850
[ "Instant messaging", "Instant messaging clients" ]
2,925
https://en.wikipedia.org/wiki/Ackermann%20function
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. This function is defined from the recurrence relation with appropriate base cases. Its value grows very rapidly; for example, results in , an integer with 19,729 decimal digits. History In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function (from Greek, the letter phi). Ackermann's three-argument function, , is defined such that for , it reproduces the basic operations of addition, multiplication, and exponentiation as and for p > 2 it extends these basic operations in a way that can be compared to the hyperoperations: (Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose—such as Goodstein's hyperoperation sequence.) In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers. Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors. The generalized hyperoperation sequence, e.g. , is a version of the Ackermann function as well. In 1963 R.C. Buck based an intuitive two-variable variant on the hyperoperation sequence: Compared to most other versions, Buck's function has no unessential offsets: Many other versions of Ackermann function have been investigated. Definition Definition: as m-ary function Ackermann's original three-argument function is defined recursively as follows for nonnegative integers and : Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers and as follows: The Ackermann function has also been expressed in relation to the hyperoperation sequence: or, written in Knuth's up-arrow notation (extended to integer indices ): or, equivalently, in terms of Buck's function F: Definition: as iterated 1-ary function Define as the n-th iterate of : Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so . 
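Before continuing with the iterated unary formulation, it can help to see the two-argument version as executable code. The sketch below is only an illustration, not a reference implementation: it transcribes the standard Ackermann–Péter recurrence A(0, n) = n + 1, A(m + 1, 0) = A(m, 1), A(m + 1, n + 1) = A(m, A(m + 1, n)) directly into Python, with memoization and a raised recursion limit added purely as practical conveniences. Because both the values and the recursion depth explode, it is usable only for very small m.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion is deep even for modest inputs

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Two-argument Ackermann–Péter function:
    A(0, n) = n + 1; A(m+1, 0) = A(m, 1); A(m+1, n+1) = A(m, A(m+1, n))."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61, i.e. 2**(3 + 3) - 3
# ackermann(4, 2) is far beyond this naive recursion: it equals 2**65536 - 3,
# the 19,729-digit integer mentioned in the introduction.
```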
Conceiving the Ackermann function as a sequence of unary functions, one can set . The function then becomes a sequence of unary functions, defined from iteration: Computation The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS). TRS, based on 2-ary function The definition of the 2-ary Ackermann function leads to the obvious reduction rules Example Compute The reduction sequence is To compute one can use a stack, which initially contains the elements . Then repeatedly the two top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 2 elements; PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3 } The pseudocode is published in . For example, on input , Remarks The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code. For all the computation of takes no more than steps. pointed out that in the computation of the maximum length of the stack is , as long as . Their own algorithm, inherently iterative, computes within time and within space. TRS, based on iterated 1-ary function The definition of the iterated 1-ary Ackermann functions leads to different reduction rules As function composition is associative, instead of rule r6 one can define Like in the previous section the computation of can be implemented with a stack. Initially the stack contains the three elements . Then repeatedly the three top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 3 elements; PUSH 1 or 3 or 5 elements, applying the rules r4, r5, r6; } Example On input the successive stack configurations are The corresponding equalities are When reduction rule r7 is used instead of rule r6, the replacements in the stack will follow The successive stack configurations will then be The corresponding equalities are Remarks On any given input the TRSs presented so far converge in the same number of steps. They also use the same reduction rules (in this comparison the rules r1, r2, r3 are considered "the same as" the rules r4, r5, r6/r7 respectively). For example, the reduction of converges in 14 steps: 6 × r1, 3 × r2, 5 × r3. The reduction of converges in the same 14 steps: 6 × r4, 3 × r5, 5 × r6/r7. The TRSs differ in the order in which the reduction rules are applied. When is computed following the rules {r4, r5, r6}, the maximum length of the stack stays below . When reduction rule r7 is used instead of rule r6, the maximum length of the stack is only . The length of the stack reflects the recursion depth. As the reduction according to the rules {r4, r5, r7} involves a smaller maximum depth of recursion, this computation is more efficient in that respect. TRS, based on hyperoperators As — or — showed explicitly, the Ackermann function can be expressed in terms of the hyperoperation sequence: or, after removal of the constant 2 from the parameter list, in terms of Buck's function Buck's function , a variant of Ackermann function by itself, can be computed with the following reduction rules: Instead of rule b6 one can define the rule To compute the Ackermann function it suffices to add three reduction rules These rules take care of the base case A(0,n), the alignment (n+3) and the fudge (-3). Example Compute The matching equalities are when the TRS with the reduction rule is applied: when the TRS with the reduction rule is applied: Remarks The computation of according to the rules {b1 - b5, b6, r8 - r10} is deeply recursive. 
The maximum depth of nested s is . The culprit is the order in which iteration is executed: . The first disappears only after the whole sequence is unfolded. The computation according to the rules {b1 - b5, b7, r8 - r10} is more efficient in that respect. The iteration simulates the repeated loop over a block of code. The nesting is limited to , one recursion level per iterated function. showed this correspondence. These considerations concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules b6 and b7 are considered "the same"). The reduction of for instance converges in 35 steps: 12 × b1, 4 × b2, 1 × b3, 4 × b5, 12 × b6/b7, 1 × r9, 1 × r10. The modus iterandi only affects the order in which the reduction rules are applied. A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance . published a cunning algorithm which computes within time and within space. Huge numbers To demonstrate how the computation of results in many steps and in a large number: Table of values Computing the Ackermann function can be restated in terms of an infinite table. First, place the natural numbers along the top row. To determine a number in the table, take the number immediately to the left. Then use that number to look up the required number in the column given by that number and one row up. If there is no number to its left, simply look at the column headed "1" in the previous row. Here is a small upper-left portion of the table: The numbers here which are only expressed with recursive exponentiation or Knuth arrows are very large and would take up too much space to notate in plain decimal digits. Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively. This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly: Properties General remarks It may not be immediately obvious that the evaluation of always terminates. However, the recursion is bounded because in each recursive application either decreases, or remains the same and decreases. Each time that reaches zero, decreases, so eventually reaches zero as well. (Expressed more technically, in each case the pair decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when decreases there is no upper bound on how much can increase — and it will often increase greatly. For small values of m like 1, 2, or 3, the Ackermann function grows relatively slowly with respect to n (at most exponentially). For , however, it grows much more quickly; even is about 2.00353, and the decimal expansion of is very large by any typical measure, about 2.12004. An interesting aspect is that the only arithmetic operation it ever uses is addition of 1. Its fast growing power is based solely on nested recursion. 
This also implies that its running time is at least proportional to its output, and so is also extremely huge. In actuality, for most cases the running time is far larger than the output; see above. A single-argument version that increases both and at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that is roughly comparable to in the fast-growing hierarchy. This extreme growth can be exploited to show that which is obviously computable on a machine with infinite memory such as a Turing machine and so is a computable function, grows faster than any primitive recursive function and is therefore not primitive recursive. Not primitive recursive The Ackermann function grows faster than any primitive recursive function and therefore is not itself primitive recursive. The sketch of the proof is this: a primitive recursive function defined using up to k recursions must grow slower than , the (k+1)-th function in the fast-growing hierarchy, but the Ackermann function grows at least as fast as . Specifically, one shows that for every primitive recursive function there exists a non-negative integer such that for all non-negative integers , Once this is established, it follows that itself is not primitive recursive, since otherwise putting would lead to the contradiction The proof proceeds as follows: define the class of all functions that grow slower than the Ackermann function and show that contains all primitive recursive functions. The latter is achieved by showing that contains the constant functions, the successor function, the projection functions and that it is closed under the operations of function composition and primitive recursion. Inverse Since the function considered above grows very rapidly, its inverse function, f, grows very slowly. This inverse Ackermann function f−1 is usually denoted by α. In fact, α(n) is less than 5 for any practical input size n, since is on the order of . This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees. Sometimes Ackermann's original function or other variations are used in these settings, but they all grow at similarly high rates. In particular, some modified functions simplify the expression by eliminating the −3 and similar terms. A two-parameter variation of the inverse Ackermann function can be defined as follows, where is the floor function: This function arises in more precise analyses of the algorithms mentioned above, and gives a more refined time bound. In the disjoint-set data structure, m represents the number of operations while n represents the number of elements; in the minimum spanning tree algorithm, m represents the number of edges while n represents the number of vertices. Several slightly different definitions of exist; for example, is sometimes replaced by n, and the floor function is sometimes replaced by a ceiling. Other studies might define an inverse function of one where m is set to a constant, such that the inverse applies to a particular row. The inverse of the Ackermann function is primitive recursive. 
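In practice, the inverse Ackermann function is encountered most often through the disjoint-set (union-find) data structure mentioned above. The following Python sketch is a generic textbook-style implementation, not code from any source cited here; it shows the two standard heuristics, union by rank and path compression, whose combination gives the O(α(n)) amortized bound per operation.

```python
class DisjointSet:
    """Disjoint-set (union-find) structure with union by rank and path
    compression; the amortized cost per operation is O(alpha(n))."""

    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x: int) -> int:
        # Path compression: point every node on the search path at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSet(6)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(2) == ds.find(0))  # True: 0, 1, 2 are now in one set
print(ds.find(3) == ds.find(0))  # False: 3 was never merged
```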
Usage In computational complexity The Ackermann function appears in the time complexity of some algorithms, such as vector addition systems and Petri net reachability, thus showing they are computationally infeasible for large instances. The inverse of the Ackermann function appears in some time complexity results. For instance, the disjoint-set data structure takes amortized time per operation proportional to the inverse Ackermann function, and cannot be made faster within the cell-probe model of computational complexity. In discrete geometry Certain problems in discrete geometry related to Davenport–Schinzel sequences have complexity bounds in which the inverse Ackermann function appears. For instance, for line segments in the plane, the unbounded face of the arrangement of the segments has complexity , and some systems of line segments have an unbounded face of complexity . As a benchmark The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first published use of Ackermann's function in this way was in 1970 by Dragoș Vaida and, almost simultaneously, in 1971, by Yngve Sundblad. Sundblad's seminal paper was taken up by Brian Wichmann (co-author of the Whetstone benchmark) in a trilogy of papers written between 1975 and 1982. See also Computability theory Double recursion Fast-growing hierarchy Goodstein function Primitive recursive function Recursion (computer science) Notes References Bibliography External links An animated Ackermann function calculator Ackermann functions. Includes a table of some values. describes several variations on the definition of A. The Ackermann function written in different programming languages, (on Rosetta Code) ) Some study and programming. Arithmetic Large integers Special functions Theory of computation Computability theory
Ackermann function
[ "Mathematics" ]
3,416
[ "Special functions", "Mathematical logic", "Combinatorics", "Arithmetic", "Computability theory", "Number theory" ]
2,943
https://en.wikipedia.org/wiki/Dual%20wield
Dual wielding is the technique of using two weapons, one in each hand, for training or combat. It is not a common combat practice. Although historical records of dual wielding in war are limited, there are numerous weapon-based martial arts that involve the use of a pair of weapons. The use of a companion weapon is sometimes employed in European martial arts and fencing, such as a parrying dagger. Miyamoto Musashi, a Japanese swordsman and ronin, was said to have conceived of the idea of a particular style of swordsmanship involving the use of two swords. In terms of firearms, especially handguns, dual wielding is generally denounced by firearm enthusiasts due to its impracticality. Though using two handguns at the same time confers an advantage by allowing more ready ammunition, it is rarely done due to other aspects of weapons handling. Dual wielding, both with melee and ranged weapons, has been popularized by fictional works (film, television, and video games). History Dual wielding has not been used or mentioned much in military history, though it appears in weapon-based martial arts and fencing practices. The dimachaerus was a type of Roman gladiator that fought with two swords. Thus, an inscription from Lyon, France, mentions such a type of gladiator, here spelled dymacherus. The dimachaeri were equipped for close-combat fighting. A dimachaerus used a pair of siccae (curved scimitars) or gladii and a fighting style adapted to both attack and defend with his weapons rather than a shield, as he was not equipped with one. The use of weapon combinations in each hand has been mentioned for close combat in western Europe during the Byzantine, Medieval, and Renaissance era. The use of a parrying dagger such as a main gauche along with a rapier is common in historical European martial arts. North American Indian tribes of the Atlantic northeast used a form involving a tomahawk in the primary hand and a knife in the secondary. It is practiced today as part of the modern Cree martial art Okichitaw. All the above-mentioned examples involve either one long and one short weapon, or two short weapons. An example of dual wielding two sabres is the Ukrainian cossack dance hopak. Asia During the Muslim conquests of the 7th century AD, the Rashidun caliphate general Khalid ibn Walid was reported to favor wielding two broad swords, one in each hand, during combat. Traditional schools of Japanese martial arts include dual wield techniques, particularly a style conceived by Miyamoto Musashi involving the katana and wakizashi, two-sword kenjutsu techniques he called Niten Ichi-ryū. Eskrima, the traditional martial art of the Philippines, teaches Doble Baston techniques involving the basic use of a pair of rattan sticks and also Espada y daga or Sword/Stick and Dagger. Okinawan martial arts have a method that uses a pair of sai. Chinese martial arts involve the use of a pair of butterfly swords and hook swords. Famed for his enormous strength, Dian Wei, a military general serving under the warlord Cao Cao in the late Eastern Han dynasty of China, excelled at wielding a pair of ji (a halberd-like weapon), each of which was said to weigh 40 jin. Chen An, a warlord who lived during the Jin dynasty (266–420) and Sixteen Kingdoms period, wielded a sword and a serpent spear, one in each hand, said to measure 7 chi and 1 zhang 8 chi respectively.
During the Ran Wei–Later Zhao war, Ran Min, emperor of the short-lived Ran Wei empire of China, wielded two weapons, one in each hand, and fought fiercely, inflicting many casualties on the Xianbei soldiers while mounted on the famous horse Zhu Long ("Red Dragon"). Gatka, a weapon-based martial art from the Punjab region, is known to use two sticks at a time. The Thai weapon-based martial art Krabi Krabong involves the use of a separate Krabi in each hand. Kalaripayattu teaches advanced students to use either two sticks (of various sizes) or two daggers or two swords simultaneously. Modern The use of a gun in each hand is often associated with the American Old West, mainly due to media portrayals. It was common for people in the era to carry two guns, but not to use them at the same time, as shown in movies. The second gun served as a backup weapon, to be used only if the main one suffered a malfunction or was lost or emptied. However, there were several examples of gunmen in the West who actually used two pistols at the same time in their gunfights: John Wesley Hardin killed a gunman named Benjamin Bradley, who shot at him, by drawing both of his pistols and firing back. The Mexican vaquero Augustine Chacon had several gunfights in which he was outnumbered by more than one gunman and prevailed by equipping himself with a revolver in each hand. King Fisher once managed to kill three bandits in a shootout by pulling both of his pistols. During the infamous Four Dead in Five Seconds Gunfight, lawman Dallas Stoudenmire pulled both of his pistols as he ran out onto the street and killed one bystander and two other gunmen. Jonathan R. Davis, a prospector during the California Gold Rush, was ambushed by thirteen outlaws while accompanied by two of his comrades. One of his friends was killed and the other was mortally wounded during the ambush. Davis drew both of his revolvers and fired, killing seven of the bandits, then killed four more with his bowie knife, causing the final two to flee. Dual wielding two handguns has been popularized by film and television. Effectiveness MythBusters compared many firing stances, including having a gun in each hand, and found that, compared to the two-handed single-gun stance as a benchmark, only the one-handed shoulder-level stance with a single gun was comparable in terms of accuracy and speed. The ability to look down the sights of the gun was given as the main reason for this. In an episode the following year, they compared holding two guns and firing simultaneously—rather than alternating left and right shots—with holding one gun in the two-handed stance, and found that the results were in favor of using two guns and firing simultaneously. In media The Teenage Mutant Ninja Turtles features dual wielding being done by Leonardo with two katana swords, Raphael with two sais, and Michelangelo with two nunchucks. Sometimes, their arch enemy known as the Shredder dual wields with many weapons. Princess Mononoke features Lady Eboshi dual wielding with a katana sword and a hairpin. Marvel Comics features dual wielding by Deadpool with two katana swords, Nightcrawler with two sabres, Elektra with two sais, and Black Widow with two pistols and two batons. DC Comics features Dick Grayson and Barbara Gordon dual wielding two bastons. The Star Wars franchise features many characters dual wielding two or more lightsabers, including Anakin Skywalker, Ahsoka Tano, and General Grievous.
Star Wars: The Clone Wars features Palpatine and his former apprentice, Darth Maul, dual wielding two lightsabers each. Also, characters dual wielding two blaster pistols include Jango Fett and Bo-Katan Kryze. The Halo franchise allows dual-wielding weapons from Halo 2 and Halo 3 onwards. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe features the noble centaur general Oreius dual wielding two longswords, and also the oppressive White Witch doing the same. It also features the Minotaur general Otmin dual wielding a falchion sword and a battle axe. Ip Man 3 features butterfly swords being dual wielded by Ip Man and Cheung Tin-chi. The Hobbit and The Lord of the Rings feature the virtuous wizard Gandalf dual wielding a magic staff and a mystic longsword. The Mummy Returns features the adventurous Egyptologist Evelyn O'Connell and the treacherous Anck-su-namun dual wielding two sais. The Pirates of the Caribbean features characters dual wielding two swords, including Jack Sparrow, Will Turner, and Elizabeth Swann. The martial arts movie Crouching Tiger, Hidden Dragon features Michelle Yeoh as Yu Shu Lien dual wielding a dao sword which splits in two, and then two hook swords. The Three Musketeers features many characters dual wielding rapiers and daggers. Mighty Morphin Power Rangers features Tommy Oliver dual wielding a sword and a dagger. Robin of Sherwood features Nasir, a Saracen assassin who dual wields two scimitars. Avatar: The Legend of Aang features dual wielding done by Zuko with two dao swords, Jet with two hook swords, Suki with two war fans, and Sokka with a machete along with a club or a boomerang. The Transformers features dual wielding being done by many characters including Optimus Prime and Optimus Primal with two swords. Kung Fu Hustle features iron rings being dual wielded by the humble tailor of Pigsty Alley. Power Rangers: Jungle Fury features dual wielding being done by Casey Rhodes with two nunchakus and also two dao-themed Shark Sabres, Theo Martin with two tonfas and then two tessan-themed Jungle Fans, and Camille with two sais. The Marvel Cinematic Universe (MCU) martial arts film Shang Chi and the Legend of the Ten Rings features the Ten Rings being dual wielded by Wenwu, the MCU version of the Mandarin, and then by Shang Chi, his son. The musical version of The Lion King features Mufasa and his son Simba dual wielding two akrafena swords to fight. Lara Croft, the heroine of the Tomb Raider franchise, dual wields two pistols. Dante, the protagonist of the Devil May Cry series, dual wields two pistols, named Ebony and Ivory. Kirito, the protagonist of Sword Art Online, is famous for being able to wield two swords of a similar length at the same time. See also Ambidexterity Cross-dominance Dimachaerus Gun fu Swordsmanship References Combat Video game terminology
Dual wield
[ "Technology" ]
2,131
[ "Computing terminology", "Video game terminology" ]
2,948
https://en.wikipedia.org/wiki/Agner%20Krarup%20Erlang
Agner Krarup Erlang (1 January 1878 – 3 February 1929) was a Danish mathematician, statistician and engineer, who invented the fields of traffic engineering and queueing theory. Erlang's 1909 paper, and subsequent papers over the decades, are regarded as containing some of the most important concepts and techniques for queueing theory. By the time of his relatively early death at the age of 51, Erlang had created the field of telephone network analysis. His early work scrutinizing local, exchange and trunk telephone line usage in a small community to understand the theoretical requirements of an efficient network led to the creation of the Erlang formula, which became a foundational element of modern telecommunications network studies. Life Erlang was born at Lønborg, near Tarm, in Jutland. He was the son of a schoolmaster, and a descendant of Thomas Fincke on his mother's side. At age 14, he passed the Preliminary Examination of the University of Copenhagen with distinction, after receiving dispensation to take it because he was younger than the usual minimum age. For the next two years he taught alongside his father. A distant relative provided free board and lodging, and Erlang prepared for and took the University of Copenhagen entrance examination in 1896, and passed with distinction. He won a scholarship to the university and majored in mathematics, and also studied astronomy, physics and chemistry. He graduated in 1901 with an MA and over the next 7 years taught at several schools. He maintained his interest in mathematics, and received an award for a paper that he submitted to the University of Copenhagen. He was a member of the Danish Mathematicians' Association (DMF) and through this met amateur mathematician Johan Jensen, the Chief Engineer of the Copenhagen Telephone Company (KTAS in Danish), an offshoot of the International Bell Telephone Company. Erlang worked for the Copenhagen Telephone Company from 1908 for almost 20 years, until his death in Copenhagen after an abdominal operation. He was an associate of the British Institution of Electrical Engineers. Contributions While working for the CTC, Erlang was presented with the classic problem of determining how many circuits were needed to provide an acceptable telephone service. His thinking went further by finding how many telephone operators were needed to handle a given volume of calls. Most telephone exchanges then used human operators and cord boards to switch telephone calls by means of jack plugs. Out of necessity, Erlang was a hands-on researcher. He would conduct measurements and was prepared to climb into street manholes to do so. He was also an expert in the history and calculation of the numerical tables of mathematical functions, particularly logarithms. He devised new calculation methods for certain forms of tables. He developed his theory of telephone traffic over several years. His significant publications include: 1909 – "The Theory of Probabilities and Telephone Conversations", which proves that the Poisson distribution applies to random telephone traffic. 1917 – "Solution of some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges", which contains his classic formulae for call loss and waiting time. 1920 – "Telephone waiting times", which is Erlang's principal work on waiting times, assuming constant holding times. These and other notable papers were translated into English, French and German.
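To give a flavour of the call-loss result contained in the 1917 paper above, the following short Python sketch evaluates the Erlang B blocking probability using the standard numerically stable recurrence; this is an illustrative restatement in modern notation rather than Erlang's own presentation, and the function and variable names are chosen for this sketch only:

def erlang_b(offered_traffic, circuits):
    """Probability that a call is blocked, given offered traffic in erlangs and a number of circuits."""
    b = 1.0                                   # B(E, 0) = 1
    for k in range(1, circuits + 1):
        # Recurrence: B(E, k) = E * B(E, k-1) / (k + E * B(E, k-1))
        b = (offered_traffic * b) / (k + offered_traffic * b)
    return b

# Example: 2 erlangs of offered traffic on 5 circuits.
print(round(erlang_b(2.0, 5), 4))   # roughly 0.037, i.e. about 3.7% of calls blocked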
His papers were prepared in a very brief style and can be difficult to understand without a background in the field. One Bell Telephone Laboratories researcher is said to have learned Danish to study them. The British Post Office accepted his formula as the basis for calculating circuit facilities. In 1946, the CCITT named the international unit of telephone traffic the "erlang". A statistical distribution and programming language listed below have also been named in his honour. Erlang also made an important contribution to physiologic modeling with the Krogh-Erlang capillary cylinder model describing oxygen supply to living tissue. See also Erlang – a unit of communication activity Erlang distribution – a statistical probability distribution Erlang programming language – developed by Ericsson for large industrial real-time systems Queueing theory Teletraffic engineering References 20th-century Danish mathematicians 20th-century Danish engineers Electrical engineers Queueing theorists Danish statisticians Danish business theorists 1878 births 1929 deaths People from Ringkøbing-Skjern Municipality Danish civil engineers University of Copenhagen alumni
Agner Krarup Erlang
[ "Engineering" ]
880
[ "Electrical engineering", "Electrical engineers" ]
2,955
https://en.wikipedia.org/wiki/Alkali
In chemistry, an alkali (; from the Arabic word , ) is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often, alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases. Etymology The word alkali is derived from Arabic al qalīy (or alkali), meaning (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats in the process of saponification, one known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name ), which ultimately derived from alkali. Common properties of alkalis and bases Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include: Moderately concentrated solutions (over 10−3 M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink. Concentrated solutions are caustic (causing chemical burns). Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin. Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution. Difference between alkali and base The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering. There are various, more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen. A basic salt of an alkali metal or alkaline earth metal (this includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia)). Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.) The second subset of bases is also called an "Arrhenius base". Alkali salts Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are: Sodium hydroxide (NaOH) – often called "caustic soda" Potassium hydroxide (KOH) – commonly called "caustic potash" Lye – generic term for either of two previous salts or their mixture Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater" Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions) Alkaline soil Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally due to the presence of alkali salts. 
Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems. Alkali lakes In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake. Examples of alkali lakes: Alkali Lake, Lake County, Oregon Baldwin Lake, San Bernardino County, California Bear Lake on the Utah–Idaho border Lake Magadi in Kenya Lake Turkana in Kenya Mono Lake, near Owens Valley in California Redberry Lake, Saskatchewan Summer Lake, Lake County, Oregon Tramping Lake, Saskatchewan See also Alkali metals Alkaline earth metals Alkaline magma series Base (chemistry) References Inorganic chemistry
Alkali
[ "Chemistry" ]
999
[ "nan" ]
2,961
https://en.wikipedia.org/wiki/Convex%20uniform%20honeycomb
In geometry, a convex uniform honeycomb is a uniform tessellation which fills three-dimensional Euclidean space with non-overlapping convex uniform polyhedral cells. Twenty-eight such honeycombs are known: the familiar cubic honeycomb and 7 truncations thereof; the alternated cubic honeycomb and 4 truncations thereof; 10 prismatic forms based on the uniform plane tilings (11 if including the cubic honeycomb); 5 modifications of some of the above by elongation and/or gyration. They can be considered the three-dimensional analogue to the uniform tilings of the plane. The Voronoi diagram of any lattice forms a convex uniform honeycomb in which the cells are zonohedra. History 1900: Thorold Gosset enumerated the list of semiregular convex polytopes with regular cells (Platonic solids) in his publication On the Regular and Semi-Regular Figures in Space of n Dimensions, including one regular cubic honeycomb, and two semiregular forms with tetrahedra and octahedra. 1905: Alfredo Andreini enumerated 25 of these tessellations. 1991: Norman Johnson's manuscript Uniform Polytopes identified the list of 28. 1994: Branko Grünbaum, in his paper Uniform tilings of 3-space, also independently enumerated all 28, after discovering errors in Andreini's publication. He found the 1905 paper, which listed 25, had 1 wrong, and 4 being missing. Grünbaum states in this paper that Norman Johnson deserves priority for achieving the same enumeration in 1991. He also mentions that I. Alexeyev of Russia had contacted him regarding a putative enumeration of these forms, but that Grünbaum was unable to verify this at the time. 2006: George Olshevsky, in his manuscript Uniform Panoploid Tetracombs, along with repeating the derived list of 11 convex uniform tilings, and 28 convex uniform honeycombs, expands a further derived list of 143 convex uniform tetracombs (Honeycombs of uniform 4-polytopes in 4-space). Only 14 of the convex uniform polyhedra appear in these patterns: three of the five Platonic solids (the tetrahedron, cube, and octahedron), six of the thirteen Archimedean solids (the ones with reflective tetrahedral or octahedral symmetry), and five of the infinite family of prisms (the 3-, 4-, 6-, 8-, and 12-gonal ones; the 4-gonal prism duplicates the cube). The icosahedron, snub cube, and square antiprism appear in some alternations, but those honeycombs cannot be realised with all edges unit length. Names This set can be called the regular and semiregular honeycombs. It has been called the Archimedean honeycombs by analogy with the convex uniform (non-regular) polyhedra, commonly called Archimedean solids. Recently Conway has suggested naming the set as the Architectonic tessellations and the dual honeycombs as the Catoptric tessellations. The individual honeycombs are listed with names given to them by Norman Johnson. (Some of the terms used below are defined in Uniform 4-polytope#Geometric derivations for 46 nonprismatic Wythoffian uniform 4-polytopes) For cross-referencing, they are given with list indices from Andreini (1-22), Williams(1–2,9-19), Johnson (11–19, 21–25, 31–34, 41–49, 51–52, 61–65), and Grünbaum(1-28). Coxeter uses δ4 for a cubic honeycomb, hδ4 for an alternated cubic honeycomb, qδ4 for a quarter cubic honeycomb, with subscripts for other forms based on the ring patterns of the Coxeter diagram. 
Compact Euclidean uniform tessellations (by their infinite Coxeter group families) The fundamental infinite Coxeter groups for 3-space are: The , [4,3,4], cubic, (8 unique forms plus one alternation) The , [4,31,1], alternated cubic, (11 forms, 3 new) The cyclic group, [(3,3,3,3)] or [3[4]], (5 forms, one new) There is a correspondence between all three families. Removing one mirror from produces , and removing one mirror from produces . This allows multiple constructions of the same honeycombs. If cells are colored based on unique positions within each Wythoff construction, these different symmetries can be shown. In addition there are 5 special honeycombs which don't have pure reflectional symmetry and are constructed from reflectional forms with elongation and gyration operations. The total unique honeycombs above are 18. The prismatic stacks from infinite Coxeter groups for 3-space are: The ×, [4,4,2,∞] prismatic group, (2 new forms) The ×, [6,3,2,∞] prismatic group, (7 unique forms) The ×, [(3,3,3),2,∞] prismatic group, (No new forms) The ××, [∞,2,∞,2,∞] prismatic group, (These all become a cubic honeycomb) In addition there is one special elongated form of the triangular prismatic honeycomb. The total unique prismatic honeycombs above (excluding the cubic counted previously) are 10. Combining these counts, 18 and 10 gives us the total 28 uniform honeycombs. The C̃3, [4,3,4] group (cubic) The regular cubic honeycomb, represented by Schläfli symbol {4,3,4}, offers seven unique derived uniform honeycombs via truncation operations. (One redundant form, the runcinated cubic honeycomb, is included for completeness though identical to the cubic honeycomb.) The reflectional symmetry is the affine Coxeter group [4,3,4]. There are four index 2 subgroups that generate alternations: [1+,4,3,4], [(4,3,4,2+)], [4,3+,4], and [4,3,4]+, with the first two generated repeated forms, and the last two are nonuniform. B̃3, [4,31,1] group The , [4,3] group offers 11 derived forms via truncation operations, four being unique uniform honeycombs. There are 3 index 2 subgroups that generate alternations: [1+,4,31,1], [4,(31,1)+], and [4,31,1]+. The first generates repeated honeycomb, and the last two are nonuniform but included for completeness. The honeycombs from this group are called alternated cubic because the first form can be seen as a cubic honeycomb with alternate vertices removed, reducing cubic cells to tetrahedra and creating octahedron cells in the gaps. Nodes are indexed left to right as 0,1,0',3 with 0' being below and interchangeable with 0. The alternate cubic names given are based on this ordering. Ã3, [3[4]] group There are 5 forms constructed from the , [3[4]] Coxeter group, of which only the quarter cubic honeycomb is unique. There is one index 2 subgroup [3[4]]+ which generates the snub form, which is not uniform, but included for completeness. Nonwythoffian forms (gyrated and elongated) Three more uniform honeycombs are generated by breaking one or another of the above honeycombs where its faces form a continuous plane, then rotating alternate layers by 60 or 90 degrees (gyration) and/or inserting a layer of prisms (elongation). The elongated and gyroelongated alternated cubic tilings have the same vertex figure, but are not alike. In the elongated form, each prism meets a tetrahedron at one triangular end and an octahedron at the other. In the gyroelongated form, prisms that meet tetrahedra at both ends alternate with prisms that meet octahedra at both ends. 
The gyroelongated triangular prismatic tiling has the same vertex figure as one of the plain prismatic tilings; the two may be derived from the gyrated and plain triangular prismatic tilings, respectively, by inserting layers of cubes. Prismatic stacks Eleven prismatic tilings are obtained by stacking the eleven uniform plane tilings, shown below, in parallel layers. (One of these honeycombs is the cubic, shown above.) The vertex figure of each is an irregular bipyramid whose faces are isosceles triangles. The C̃2×Ĩ1(∞), [4,4,2,∞], prismatic group There are only 3 unique honeycombs from the square tiling, but all 6 tiling truncations are listed below for completeness, and tiling images are shown by colors corresponding to each form. The G̃2xĨ1(∞), [6,3,2,∞] prismatic group Enumeration of Wythoff forms All nonprismatic Wythoff constructions by Coxeter groups are given below, along with their alternations. Uniform solutions are indexed with Branko Grünbaum's listing. Green backgrounds are shown on repeated honeycombs, with the relations are expressed in the extended symmetry diagrams. Examples The alternated cubic honeycomb is of special importance since its vertices form a cubic close-packing of spheres. The space-filling truss of packed octahedra and tetrahedra was apparently first discovered by Alexander Graham Bell and independently re-discovered by Buckminster Fuller (who called it the octet truss and patented it in the 1940s). . Octet trusses are now among the most common types of truss used in construction. Frieze forms If cells are allowed to be uniform tilings, more uniform honeycombs can be defined: Families: ×: [4,4,2] Cubic slab honeycombs (3 forms) ×: [6,3,2] Tri-hexagonal slab honeycombs (8 forms) ×: [(3,3,3),2] Triangular slab honeycombs (No new forms) ××: [∞,2,2] = Cubic column honeycombs (1 form) ×: [p,2,∞] Polygonal column honeycombs (analogous to duoprisms: these look like a single infinite tower of p-gonal prisms, with the remaining space filled with apeirogonal prisms) ××: [∞,2,∞,2] = [4,4,2] - = (Same as cubic slab honeycomb family) The first two forms shown above are semiregular (uniform with only regular facets), and were listed by Thorold Gosset in 1900 respectively as the 3-ic semi-check and tetroctahedric semi-check. Scaliform honeycomb A scaliform honeycomb is vertex-transitive, like a uniform honeycomb, with regular polygon faces while cells and higher elements are only required to be orbiforms, equilateral, with their vertices lying on hyperspheres. For 3D honeycombs, this allows a subset of Johnson solids along with the uniform polyhedra. Some scaliforms can be generated by an alternation process, leaving, for example, pyramid and cupola gaps. Hyperbolic forms There are 9 Coxeter group families of compact uniform honeycombs in hyperbolic 3-space, generated as Wythoff constructions, and represented by ring permutations of the Coxeter-Dynkin diagrams for each family. From these 9 families, there are a total of 76 unique honeycombs generated: [3,5,3] : - 9 forms [5,3,4] : - 15 forms [5,3,5] : - 9 forms [5,31,1] : - 11 forms (7 overlap with [5,3,4] family, 4 are unique) [(4,3,3,3)] : - 9 forms [(4,3,4,3)] : - 6 forms [(5,3,3,3)] : - 9 forms [(5,3,4,3)] : - 9 forms [(5,3,5,3)] : - 6 forms Several non-Wythoffian forms outside the list of 76 are known; it is not known how many there are. Paracompact hyperbolic forms There are also 23 paracompact Coxeter groups of rank 4. 
These families can produce uniform honeycombs with unbounded facets or vertex figure, including ideal vertices at infinity: References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (2008) The Symmetries of Things, (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, Architectonic and Catoptric tessellations, p 292–298, includes all the nonprismatic forms) Branko Grünbaum, (1994) Uniform tilings of 3-space. Geombinatorics 4, 49 - 56. Norman Johnson (1991) Uniform Polytopes, Manuscript (Chapter 5: Polyhedra packing and space filling) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) A. Andreini, (1905) Sulle reti di poliedri regolari e semiregolari e sulle corrispondenti reti correlative (On the regular and semiregular nets of polyhedra and on the corresponding correlative nets), Mem. Società Italiana della Scienze, Ser.3, 14 75–129. PDF D. M. Y. Sommerville, (1930) An Introduction to the Geometry of n Dimensions. New York, E. P. Dutton, . 196 pp. (Dover Publications edition, 1958) Chapter X: The Regular Polytopes Chapter 5. Joining polyhedra Crystallography of Quasicrystals: Concepts, Methods and Structures by Walter Steurer, Sofia Deloudi (2009), p. 54-55. 12 packings of 2 or more uniform polyhedra with cubic symmetry External links Uniform Honeycombs in 3-Space VRML models Elementary Honeycombs Vertex transitive space filling honeycombs with non-uniform cells. Uniform partitions of 3-space, their relatives and embedding, 1999 The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra octet truss animation Review: A. F. Wells, Three-dimensional nets and polyhedra, H. S. M. Coxeter (Source: Bull. Amer. Math. Soc. Volume 84, Number 3 (1978), 466-470.) Honeycombs (geometry)
Convex uniform honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
3,284
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
2,974
https://en.wikipedia.org/wiki/Abelian%20group
In mathematics, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written. That is, the group operation is commutative. With addition as an operation, the integers and the real numbers form abelian groups, and the concept of an abelian group may be viewed as a generalization of these examples. Abelian groups are named after Niels Henrik Abel. The concept of an abelian group underlies many fundamental algebraic structures, such as fields, rings, vector spaces, and algebras. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood and fully classified. Definition An abelian group is a set , together with an operation ・ , that combines any two elements and of to form another element of denoted . The symbol ・ is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation, , must satisfy four requirements known as the abelian group axioms (some authors include in the axioms some properties that belong to the definition of an operation: namely that the operation is defined for any ordered pair of elements of , that the result is well-defined, and that the result belongs to ): Associativity For all , , and in , the equation holds. Identity element There exists an element in , such that for all elements in , the equation holds. Inverse element For each in there exists an element in such that , where is the identity element. Commutativity For all , in , . A group in which the group operation is not commutative is called a "non-abelian group" or "non-commutative group". Facts Notation There are two main notational conventions for abelian groups – additive and multiplicative. Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules and rings. The additive notation may also be used to emphasize that a particular group is abelian, whenever both abelian and non-abelian groups are considered, some notable exceptions being near-rings and partially ordered groups, where an operation is written additively even when non-abelian. Multiplication table To verify that a finite group is abelian, a table (matrix) – known as a Cayley table – can be constructed in a similar fashion to a multiplication table. If the group is under the the entry of this table contains the product . The group is abelian if and only if this table is symmetric about the main diagonal. This is true since the group is abelian iff for all , which is iff the entry of the table equals the entry for all , i.e. the table is symmetric about the main diagonal. Examples For the integers and the operation addition , denoted , the operation + combines any two integers to form a third integer, addition is associative, zero is the additive identity, every integer has an additive inverse, , and the addition operation is commutative since for any two integers and . Every cyclic group is abelian, because if , are in , then . Thus the integers, , form an abelian group under addition, as do the integers modulo , . Every ring is an abelian group with respect to its addition operation. In a commutative ring the invertible elements, or units, form an abelian multiplicative group. 
In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication. Every subgroup of an abelian group is normal, so each subgroup gives rise to a quotient group. Subgroups, quotients, and direct sums of abelian groups are again abelian. The finite simple abelian groups are exactly the cyclic groups of prime order. The concepts of abelian group and -module agree. More specifically, every -module is an abelian group with its operation of addition, and every abelian group is a module over the ring of integers in a unique way. In general, matrices, even invertible matrices, do not form an abelian group under multiplication because matrix multiplication is generally not commutative. However, some groups of matrices are abelian groups under matrix multiplication – one example is the group of rotation matrices. Historical remarks Camille Jordan named abelian groups after Norwegian mathematician Niels Henrik Abel, as Abel had found that the commutativity of the group of a polynomial implies that the roots of the polynomial can be calculated by using radicals. Properties If is a natural number and is an element of an abelian group written additively, then can be defined as ( summands) and . In this way, becomes a module over the ring of integers. In fact, the modules over can be identified with the abelian groups. Theorems about abelian groups (i.e. modules over the principal ideal domain ) can often be generalized to theorems about modules over an arbitrary principal ideal domain. A typical example is the classification of finitely generated abelian groups which is a specialization of the structure theorem for finitely generated modules over a principal ideal domain. In the case of finitely generated abelian groups, this theorem guarantees that an abelian group splits as a direct sum of a torsion group and a free abelian group. The former may be written as a direct sum of finitely many groups of the form for prime, and the latter is a direct sum of finitely many copies of . If are two group homomorphisms between abelian groups, then their sum , defined by , is again a homomorphism. (This is not true if is a non-abelian group.) The set of all group homomorphisms from to is therefore an abelian group in its own right. Somewhat akin to the dimension of vector spaces, every abelian group has a rank. It is defined as the maximal cardinality of a set of linearly independent (over the integers) elements of the group. Finite abelian groups and torsion groups have rank zero, and every abelian group of rank zero is a torsion group. The integers and the rational numbers have rank one, as well as every nonzero additive subgroup of the rationals. On the other hand, the multiplicative group of the nonzero rationals has an infinite rank, as it is a free abelian group with the set of the prime numbers as a basis (this results from the fundamental theorem of arithmetic). The center of a group is the set of elements that commute with every element of . A group is abelian if and only if it is equal to its center . The center of a group is always a characteristic abelian subgroup of . If the quotient group of a group by its center is cyclic then is abelian. Finite abelian groups Cyclic groups of integers modulo , , were among the first examples of groups. 
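As a small computational illustration (added here; it is not part of the article's own text), one can build the Cayley table of the integers modulo n under addition in Python and confirm that it is symmetric about the main diagonal, which is the commutativity criterion described above:

def cayley_table(n):
    # Entry (i, j) holds the "product" i + j modulo n.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_abelian(table):
    size = len(table)
    return all(table[i][j] == table[j][i] for i in range(size) for j in range(size))

print(is_abelian(cayley_table(6)))   # True: the integers modulo 6 form an abelian group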
It turns out that an arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. The automorphism group of a finite abelian group can be described directly in terms of these invariants. The theory had been first developed in the 1879 paper of Georg Frobenius and Ludwig Stickelberger and later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter of linear algebra. Any group of prime order is isomorphic to a cyclic group and therefore abelian. Any group whose order is a square of a prime number is also abelian. In fact, for every prime number there are (up to isomorphism) exactly two groups of order , namely and . Classification The fundamental theorem of finite abelian groups states that every finite abelian group can be expressed as the direct sum of cyclic subgroups of prime-power order; it is also known as the basis theorem for finite abelian groups. Moreover, automorphism groups of cyclic groups are examples of abelian groups. This is generalized by the fundamental theorem of finitely generated abelian groups, with finite groups being the special case when G has zero rank; this in turn admits numerous further generalizations. The classification was proven by Leopold Kronecker in 1870, though it was not stated in modern group-theoretic terms until later, and was preceded by a similar classification of quadratic forms by Carl Friedrich Gauss in 1801; see history for details. The cyclic group of order is isomorphic to the direct sum of and if and only if and are coprime. It follows that any finite abelian group is isomorphic to a direct sum of the form in either of the following canonical ways: the numbers are powers of (not necessarily distinct) primes, or divides , which divides , and so on up to . For example, can be expressed as the direct sum of two cyclic subgroups of order 3 and 5: . The same can be said for any abelian group of order 15, leading to the remarkable conclusion that all abelian groups of order 15 are isomorphic. For another example, every abelian group of order 8 is isomorphic to either (the integers 0 to 7 under addition modulo 8), (the odd integers 1 to 15 under multiplication modulo 16), or . See also list of small groups for finite abelian groups of order 30 or less. Automorphisms One can apply the fundamental theorem to count (and sometimes determine) the automorphisms of a given finite abelian group . To do this, one uses the fact that if splits as a direct sum of subgroups of coprime order, then Given this, the fundamental theorem shows that to compute the automorphism group of it suffices to compute the automorphism groups of the Sylow -subgroups separately (that is, all direct sums of cyclic subgroups, each with order a power of ). Fix a prime and suppose the exponents of the cyclic factors of the Sylow -subgroup are arranged in increasing order: for some . One needs to find the automorphisms of One special case is when , so that there is only one cyclic prime-power factor in the Sylow -subgroup . In this case the theory of automorphisms of a finite cyclic group can be used. Another special case is when is arbitrary but for . Here, one is considering to be of the form so elements of this subgroup can be viewed as comprising a vector space of dimension over the finite field of elements . 
The automorphisms of this subgroup are therefore given by the invertible linear transformations, so where is the appropriate general linear group. This is easily shown to have order In the most general case, where the and are arbitrary, the automorphism group is more difficult to determine. It is known, however, that if one defines and then one has in particular , , and One can check that this yields the orders in the previous examples as special cases (see Hillar & Rhea). Finitely generated abelian groups An abelian group is finitely generated if it contains a finite set of elements (called generators) such that every element of the group is a linear combination with integer coefficients of elements of . Let be a free abelian group with basis There is a unique group homomorphism such that This homomorphism is surjective, and its kernel is finitely generated (since integers form a Noetherian ring). Consider the matrix with integer entries, such that the entries of its th column are the coefficients of the th generator of the kernel. Then, the abelian group is isomorphic to the cokernel of linear map defined by . Conversely every integer matrix defines a finitely generated abelian group. It follows that the study of finitely generated abelian groups is totally equivalent with the study of integer matrices. In particular, changing the generating set of is equivalent with multiplying on the left by a unimodular matrix (that is, an invertible integer matrix whose inverse is also an integer matrix). Changing the generating set of the kernel of is equivalent with multiplying on the right by a unimodular matrix. The Smith normal form of is a matrix where and are unimodular, and is a matrix such that all non-diagonal entries are zero, the non-zero diagonal entries are the first ones, and is a divisor of for . The existence and the shape of the Smith normal form proves that the finitely generated abelian group is the direct sum where is the number of zero rows at the bottom of (and also the rank of the group). This is the fundamental theorem of finitely generated abelian groups. The existence of algorithms for Smith normal form shows that the fundamental theorem of finitely generated abelian groups is not only a theorem of abstract existence, but provides a way for computing expression of finitely generated abelian groups as direct sums. Infinite abelian groups The simplest infinite abelian group is the infinite cyclic group . Any finitely generated abelian group is isomorphic to the direct sum of copies of and a finite abelian group, which in turn is decomposable into a direct sum of finitely many cyclic groups of prime power orders. Even though the decomposition is not unique, the number , called the rank of , and the prime powers giving the orders of finite cyclic summands are uniquely determined. By contrast, classification of general infinitely generated abelian groups is far from complete. Divisible groups, i.e. abelian groups in which the equation admits a solution for any natural number and element of , constitute one important class of infinite abelian groups that can be completely characterized. Every divisible group is isomorphic to a direct sum, with summands isomorphic to and Prüfer groups for various prime numbers , and the cardinality of the set of summands of each type is uniquely determined. Moreover, if a divisible group is a subgroup of an abelian group then admits a direct complement: a subgroup of such that . 
Thus divisible groups are injective modules in the category of abelian groups, and conversely, every injective abelian group is divisible (Baer's criterion). An abelian group without non-zero divisible subgroups is called reduced. Two important special classes of infinite abelian groups with diametrically opposite properties are torsion groups and torsion-free groups, exemplified by the groups (periodic) and (torsion-free). Torsion groups An abelian group is called periodic or torsion, if every element has finite order. A direct sum of finite cyclic groups is periodic. Although the converse statement is not true in general, some special cases are known. The first and second Prüfer theorems state that if is a periodic group, and it either has a bounded exponent, i.e., for some natural number , or is countable and the -heights of the elements of are finite for each , then is isomorphic to a direct sum of finite cyclic groups. The cardinality of the set of direct summands isomorphic to in such a decomposition is an invariant of . These theorems were later subsumed in the Kulikov criterion. In a different direction, Helmut Ulm found an extension of the second Prüfer theorem to countable abelian -groups with elements of infinite height: those groups are completely classified by means of their Ulm invariants. Torsion-free and mixed groups An abelian group is called torsion-free if every non-zero element has infinite order. Several classes of torsion-free abelian groups have been studied extensively: Free abelian groups, i.e. arbitrary direct sums of Cotorsion and algebraically compact torsion-free groups such as the -adic integers Slender groups An abelian group that is neither periodic nor torsion-free is called mixed. If is an abelian group and is its torsion subgroup, then the factor group is torsion-free. However, in general the torsion subgroup is not a direct summand of , so is not isomorphic to . Thus the theory of mixed groups involves more than simply combining the results about periodic and torsion-free groups. The additive group of integers is torsion-free -module. Invariants and classification One of the most basic invariants of an infinite abelian group is its rank: the cardinality of the maximal linearly independent subset of . Abelian groups of rank 0 are precisely the periodic groups, while torsion-free abelian groups of rank 1 are necessarily subgroups of and can be completely described. More generally, a torsion-free abelian group of finite rank is a subgroup of . On the other hand, the group of -adic integers is a torsion-free abelian group of infinite -rank and the groups with different are non-isomorphic, so this invariant does not even fully capture properties of some familiar groups. The classification theorems for finitely generated, divisible, countable periodic, and rank 1 torsion-free abelian groups explained above were all obtained before 1950 and form a foundation of the classification of more general infinite abelian groups. Important technical tools used in classification of infinite abelian groups are pure and basic subgroups. Introduction of various invariants of torsion-free abelian groups has been one avenue of further progress. See the books by Irving Kaplansky, László Fuchs, Phillip Griffith, and David Arnold, as well as the proceedings of the conferences on Abelian Group Theory published in Lecture Notes in Mathematics for more recent findings. 
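Returning to the finite case, the classification by prime-power cyclic factors described earlier can also be made computational. The Python sketch below is an added illustration, not text from the article; it assumes the sympy library for prime factorisation and lists one representative per isomorphism class as a multiset of cyclic factor orders:

from sympy import factorint   # prime factorisation; assumes sympy is available

def partitions(k, largest=None):
    # All ways of writing k as a sum of positive integers, parts non-increasing.
    if k == 0:
        yield []
        return
    if largest is None:
        largest = k
    for part in range(min(k, largest), 0, -1):
        for rest in partitions(k - part, part):
            yield [part] + rest

def abelian_groups(order):
    """Return each isomorphism type as a sorted list of cyclic factor orders."""
    groups = [[]]
    for p, exponent in factorint(order).items():
        groups = [g + [p ** e for e in part]
                  for g in groups
                  for part in partitions(exponent)]
    return [sorted(g) for g in groups]

for g in abelian_groups(8):
    print(g)   # [8], [2, 4], [2, 2, 2] -- the three abelian groups of order 8

For order 15 the sketch returns the single decomposition [3, 5], matching the remark above that all abelian groups of order 15 are isomorphic.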
Additive groups of rings The additive group of a ring is an abelian group, but not all abelian groups are additive groups of rings (with nontrivial multiplication). Some important topics in this area of study are: Tensor product A.L.S. Corner's results on countable torsion-free groups Shelah's work to remove cardinality restrictions Burnside ring Relation to other mathematical topics Many large abelian groups possess a natural topology, which turns them into topological groups. The collection of all abelian groups, together with the homomorphisms between them, forms the category , the prototype of an abelian category. proved that the first-order theory of abelian groups, unlike its non-abelian counterpart, is decidable. Most algebraic structures other than Boolean algebras are undecidable. There are still many areas of current research: Amongst torsion-free abelian groups of finite rank, only the finitely generated case and the rank 1 case are well understood; There are many unsolved problems in the theory of infinite-rank torsion-free abelian groups; While countable torsion abelian groups are well understood through simple presentations and Ulm invariants, the case of countable mixed groups is much less mature. Many mild extensions of the first-order theory of abelian groups are known to be undecidable. Finite abelian groups remain a topic of research in computational group theory. Moreover, abelian groups of infinite order lead, quite surprisingly, to deep questions about the set theory commonly assumed to underlie all of mathematics. Take the Whitehead problem: are all Whitehead groups of infinite order also free abelian groups? In the 1970s, Saharon Shelah proved that the Whitehead problem is: Undecidable in ZFC (Zermelo–Fraenkel axioms), the conventional axiomatic set theory from which nearly all of present-day mathematics can be derived. The Whitehead problem is also the first question in ordinary mathematics proved undecidable in ZFC; Undecidable even if ZFC is augmented by taking the generalized continuum hypothesis as an axiom; Positively answered if ZFC is augmented with the axiom of constructibility (see statements true in L). A note on typography Among mathematical adjectives derived from the proper name of a mathematician, the word "abelian" is rare in that it is often spelled with a lowercase a, rather than an uppercase A, the lack of capitalization being a tacit acknowledgment not only of the degree to which Abel's name has been institutionalized but also of how ubiquitous in modern mathematics are the concepts introduced by him. See also , the smallest non-abelian group Notes References Unabridged and unaltered republication of a work first published by the Cambridge University Press, Cambridge, England, in 1978. External links Abelian group theory Properties of groups Niels Henrik Abel
Abelian group
[ "Mathematics" ]
4,235
[ "Mathematical structures", "Algebraic structures", "Properties of groups" ]
2,994
https://en.wikipedia.org/wiki/Anemometer
In meteorology, an anemometer () is a device that measures wind speed and direction. It is a common instrument used in weather stations. The earliest known description of an anemometer was by Italian architect and author Leon Battista Alberti (1404–1472) in 1450. History The anemometer has changed little since its development in the 15th century. Alberti is said to have invented it around 1450. In the ensuing centuries numerous others, including Robert Hooke (1635–1703), developed their own versions, with some mistakenly credited as its inventor. In 1846, Thomas Romney Robinson (1792–1882) improved the design by using four hemispherical cups and mechanical wheels. In 1926, Canadian meteorologist John Patterson (1872–1956) developed a three-cup anemometer, which was improved by Brevoort and Joiner in 1935. In 1991, Derek Weston added the ability to measure wind direction. In 1994, Andreas Pflitsch developed the sonic anemometer. Velocity anemometers Cup anemometers A simple type of anemometer was invented in 1845 by Rev. Dr. John Thomas Romney Robinson of Armagh Observatory. It consisted of four hemispherical cups on horizontal arms mounted on a vertical shaft. The air flow past the cups in any horizontal direction turned the shaft at a rate roughly proportional to the wind's speed. Therefore, counting the shaft's revolutions over a set time interval produced a value proportional to the average wind speed for a wide range of speeds. This type of instrument is also called a rotational anemometer. Four cup With a four-cup anemometer, the wind always has the hollow of one cup presented to it, and is blowing on the back of the opposing cup. Since a hollow hemisphere has a drag coefficient of 0.38 on the spherical side and 1.42 on the hollow side, more force is generated on the cup that is presenting its hollow side to the wind. Because of this asymmetrical force, torque is generated on the anemometer's axis, causing it to spin. Theoretically, the anemometer's speed of rotation should be proportional to the wind speed because the force produced on an object is proportional to the speed of the gas or fluid flowing past it. However, in practice, other factors influence the rotational speed, including turbulence produced by the apparatus, increasing drag in opposition to the torque produced by the cups and support arms, and friction on the mount point. When Robinson first designed his anemometer, he asserted that the cups moved one-third of the speed of the wind, unaffected by cup size or arm length. This was apparently confirmed by some early independent experiments, but it was incorrect. Instead, the ratio of the speed of the wind and that of the cups, the anemometer factor, depends on the dimensions of the cups and arms, and can have a value between two and a little over three. Once the error was discovered, all previous experiments involving anemometers had to be repeated. Three cup The three-cup anemometer developed by Canadian John Patterson in 1926, and subsequent cup improvements by Brevoort & Joiner of the United States in 1935, led to a cupwheel design with a nearly linear response and an error of less than 3% up to . Patterson found that each cup produced maximum torque when it was at 45° to the wind flow. The three-cup anemometer also had a more constant torque and responded more quickly to gusts than the four-cup anemometer. Three cup wind direction The three-cup anemometer was further modified by Australian Dr. Derek Weston in 1991 to also measure wind direction.
He added a tag to one cup, causing the cupwheel speed to increase and decrease as the tag moved alternately with and against the wind. Wind direction is calculated from these cyclical changes in speed, while wind speed is determined from the average cupwheel speed. Three-cup anemometers are currently the industry standard for wind resource assessment studies and practice. Vane anemometers One of the other forms of mechanical velocity anemometer is the vane anemometer. It may be described as a windmill or a propeller anemometer. Unlike the Robinson anemometer, whose axis of rotation is vertical, the vane anemometer must have its axis parallel to the direction of the wind and is therefore horizontal. Furthermore, since the wind varies in direction and the axis has to follow its changes, a wind vane or some other contrivance to fulfill the same purpose must be employed. A vane anemometer thus combines a propeller and a tail on the same axis to obtain accurate and precise wind speed and direction measurements from the same instrument. The speed of the fan is measured by a revolution counter and converted to a windspeed by an electronic chip. Hence, volumetric flow rate may be calculated if the cross-sectional area is known. In cases where the direction of the air motion is always the same, as in ventilating shafts of mines and buildings, wind vanes known as air meters are employed, and give satisfactory results. Hot-wire anemometers Hot wire anemometers use a fine wire (on the order of several micrometres) electrically heated to some temperature above the ambient. Air flowing past the wire cools the wire. As the electrical resistance of most metals is dependent upon the temperature of the metal (tungsten is a popular choice for hot-wires), a relationship can be obtained between the resistance of the wire and the speed of the air. In most cases, they cannot be used to measure the direction of the airflow, unless coupled with a wind vane. Several ways of implementing this exist, and hot-wire devices can be further classified as CCA (constant current anemometer), CVA (constant voltage anemometer) and CTA (constant-temperature anemometer). The voltage output from these anemometers is thus the result of some sort of circuit within the device trying to maintain the specific variable (current, voltage or temperature) constant, following Ohm's law. Additionally, PWM (pulse-width modulation) anemometers are also used, wherein the velocity is inferred by the time length of a repeating pulse of current that brings the wire up to a specified resistance and then stops until a threshold "floor" is reached, at which time the pulse is sent again. Hot-wire anemometers, while extremely delicate, have extremely high frequency-response and fine spatial resolution compared to other measurement methods, and as such are almost universally employed for the detailed study of turbulent flows, or any flow in which rapid velocity fluctuations are of interest. An industrial version of the fine-wire anemometer is the thermal flow meter, which follows the same concept, but uses two pins or strings to monitor the variation in temperature. The strings contain fine wires, but encasing the wires makes them much more durable and capable of accurately measuring air, gas, and emissions flow in pipes, ducts, and stacks. Industrial applications often contain dirt that will damage the classic hot-wire anemometer. 
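To make the resistance-velocity relationship concrete, a constant-temperature hot-wire signal is often modelled with King's law, an empirical relation of the form E^2 = A + B * U^n, where E is the bridge voltage and U is the air speed. The Python sketch below inverts such a calibration; King's law is a standard empirical model rather than something specified above, and the coefficients A, B and n are illustrative values that would normally be obtained by calibrating the probe against a known reference flow.

# Hypothetical constant-temperature hot-wire calibration based on King's law:
#   E**2 = A + B * U**n
# where E is the anemometer bridge voltage (V) and U is the air speed (m/s).
# A, B and n are NOT taken from the article; they are made-up example values
# that a real probe would acquire through calibration.

A = 1.60   # volts^2, zero-flow offset (assumed)
B = 0.85   # volts^2 per (m/s)^n (assumed)
n = 0.45   # King's-law exponent, typically around 0.4-0.5 (assumed)

def velocity_from_voltage(e_volts: float) -> float:
    """Invert King's law to estimate air speed (m/s) from bridge voltage."""
    if e_volts ** 2 <= A:
        return 0.0                      # at or below the zero-flow voltage
    return ((e_volts ** 2 - A) / B) ** (1.0 / n)

if __name__ == "__main__":
    for e in (1.3, 1.8, 2.2, 2.6):
        print(f"E = {e:.2f} V  ->  U = {velocity_from_voltage(e):.2f} m/s")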
Laser Doppler anemometers In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. As the particles move, they produce a Doppler shift in the backscattered laser light, which is used to calculate the speed of the particles, and therefore of the air around the anemometer. Ultrasonic anemometers Ultrasonic anemometers, first developed in the 1950s, use ultrasonic sound waves to measure wind velocity. They measure wind speed based on the time of flight of sonic pulses between pairs of transducers. The time that a sonic pulse takes to travel from one transducer to its pair is inversely proportional to the speed of sound in air plus the wind velocity in the same direction: t = L / (c + v), where t is the time of flight, L is the distance between transducers, c is the speed of sound in air and v is the wind velocity. In other words, the faster the wind is blowing, the faster the sound pulse travels. To correct for the speed of sound in air (which varies according to temperature, pressure and humidity) sound pulses are sent in both directions and the wind velocity is calculated using the forward and reverse times of flight: v = (L / 2)(1/t1 − 1/t2), where t1 is the forward time of flight and t2 the reverse. Because ultrasonic anemometers have no moving parts, they need little maintenance and can be used in harsh environments. They operate over a wide range of wind speeds. They can measure rapid changes in wind speed and direction, taking many measurements each second, and so are useful in measuring turbulent air flow patterns. Their main disadvantage is the distortion of the air flow by the structure supporting the transducers, which requires a correction based upon wind tunnel measurements to minimize the effect. Rain drops or ice on the transducers can also cause inaccuracies. Since the speed of sound varies with temperature, and is virtually stable with pressure change, ultrasonic anemometers are also used as thermometers. Measurements from pairs of transducers can be combined to yield a measurement of velocity in 1-, 2-, or 3-dimensional flow. Two-dimensional (wind speed and wind direction) sonic anemometers are used in applications such as weather stations, ship navigation, aviation, weather buoys and wind turbines. Monitoring wind turbines usually requires a refresh rate of wind speed measurements of 3 Hz, easily achieved by sonic anemometers. Three-dimensional sonic anemometers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method together with fast-response infrared gas analyzers or laser-based analyzers. Acoustic resonance anemometers Acoustic resonance anemometers are a more recent variant of sonic anemometer. The technology was invented by Savvas Kapartis and patented in 1999. Whereas conventional sonic anemometers rely on time of flight measurement, acoustic resonance sensors use resonating acoustic (ultrasonic) waves within a small purpose-built cavity in order to perform their measurement. Built into the cavity is an array of ultrasonic transducers, which are used to create separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift). 
By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction. Because acoustic resonance technology enables measurement within a small cavity, the sensors tend to be typically smaller in size than other ultrasonic sensors. The small size of acoustic resonance anemometers makes them physically strong and easy to heat, and therefore resistant to icing. This combination of features means that they achieve high levels of data availability and are well suited to wind turbine control and to other uses that require small robust sensors such as battlefield meteorology. One issue with this sensor type is measurement accuracy when compared to a calibrated mechanical sensor. For many end uses, this weakness is compensated for by the sensor's longevity and the fact that it does not require recalibration once installed. Pressure anemometers The first designs of anemometers that measure the pressure were divided into plate and tube classes. Plate anemometers These are the first modern anemometers. They consist of a flat plate suspended from the top so that the wind deflects the plate. In 1450, the Italian art architect Leon Battista Alberti invented the first such mechanical anemometer; in 1663 it was re-invented by Robert Hooke. Later versions of this form consisted of a flat plate, either square or circular, which is kept normal to the wind by a wind vane. The pressure of the wind on its face is balanced by a spring. The compression of the spring determines the actual force which the wind is exerting on the plate, and this is either read off on a suitable gauge, or on a recorder. Instruments of this kind do not respond to light winds, are inaccurate for high wind readings, and are slow at responding to variable winds. Plate anemometers have been used to trigger high wind alarms on bridges. Tube anemometers James Lind's anemometer of 1775 consisted of a vertically mounted glass U tube containing a liquid manometer (pressure gauge), with one end bent out in a horizontal direction to face the wind flow and the other vertical end capped. Though the Lind was not the first, it was the most practical and best known anemometer of this type. If the wind blows into the mouth of a tube, it causes an increase of pressure on one side of the manometer. The wind over the open end of a vertical tube causes little change in pressure on the other side of the manometer. The resulting elevation difference in the two legs of the U tube is an indication of the wind speed. However, an accurate measurement requires that the wind speed be directly into the open end of the tube; small departures from the true direction of the wind causes large variations in the reading. The successful metal pressure tube anemometer of William Henry Dines in 1892 utilized the same pressure difference between the open mouth of a straight tube facing the wind and a ring of small holes in a vertical tube which is closed at the upper end. Both are mounted at the same height. The pressure differences on which the action depends are very small, and special means are required to register them. The recorder consists of a float in a sealed chamber partially filled with water. The pipe from the straight tube is connected to the top of the sealed chamber and the pipe from the small tubes is directed into the bottom inside the float. 
Since the pressure difference determines the vertical position of the float this is a measure of the wind speed. The great advantage of the tube anemometer lies in the fact that the exposed part can be mounted on a high pole, and requires no oiling or attention for years; and the registering part can be placed in any convenient position. Two connecting tubes are required. It might appear at first sight as though one connection would serve, but the differences in pressure on which these instruments depend are so minute, that the pressure of the air in the room where the recording part is placed has to be considered. Thus, if the instrument depends on the pressure or suction effect alone, and this pressure or suction is measured against the air pressure in an ordinary room in which the doors and windows are carefully closed and a newspaper is then burnt up the chimney, an effect may be produced equal to a wind of 10 mi/h (16 km/h); and the opening of a window in rough weather, or the opening of a door, may entirely alter the registration. While the Dines anemometer had an error of only 1% at , it did not respond very well to low winds due to the poor response of the flat plate vane required to turn the head into the wind. In 1918 an aerodynamic vane with eight times the torque of the flat plate overcame this problem. Pitot tube static anemometers Modern tube anemometers use the same principle as in the Dines anemometer, but using a different design. The implementation uses a pitot-static tube, which is a pitot tube with two ports, pitot and static, that is normally used in measuring the airspeed of aircraft. The pitot port measures the dynamic pressure of the open mouth of a tube with pointed head facing the wind, and the static port measures the static pressure from small holes along the side on that tube. The pitot tube is connected to a tail so that it always makes the tube's head face the wind. Additionally, the tube is heated to prevent rime ice formation on the tube. There are two lines from the tube down to the devices to measure the difference in pressure of the two lines. The measurement devices can be manometers, pressure transducers, or analog chart recorders. Ping-pong ball anemometers A common anemometer for basic use is constructed from a ping-pong ball attached to a string. When the wind blows horizontally, it presses on and moves the ball; because ping-pong balls are very lightweight, they move easily in light winds. Measuring the angle between the string-ball apparatus and the vertical gives an estimate of the wind speed. This type of anemometer is mostly used for middle-school level instruction, which most students make on their own, but a similar device was also flown on the Phoenix Mars Lander. Effect of density on measurements In the tube anemometer the dynamic pressure is actually being measured, although the scale is usually graduated as a velocity scale. If the actual air density differs from the calibration value, due to differing temperature, elevation or barometric pressure, a correction is required to obtain the actual wind speed. Approximately 1.5% (1.6% above 6,000 feet) should be added to the velocity recorded by a tube anemometer for each 1000 ft (5% for each kilometer) above sea-level. Effect of icing At airports, it is essential to have accurate wind data under all conditions, including freezing precipitation. Anemometry is also required in monitoring and controlling the operation of wind turbines, which in cold environments are prone to in-cloud icing. 
Icing alters the aerodynamics of an anemometer and may entirely block it from operating. Therefore, anemometers used in these applications must be internally heated. Both cup anemometers and sonic anemometers are presently available with heated versions. Instrument location In order for wind speeds to be comparable from location to location, the effect of the terrain needs to be considered, especially in regard to height. Other considerations are the presence of trees, and both natural canyons and artificial canyons (urban buildings). The standard anemometer height in open rural terrain is 10 meters. See also Air flow meter Anemoi, for the ancient origin of the name of this technology Anemoscope, ancient device for measuring or predicting wind direction or weather Automated airport weather station Night of the Big Wind Particle image velocimetry Savonius wind turbine Wind power forecasting Wind run Windsock, a simple high-visibility indicator of approximate wind speed and direction Notes References Meteorological Instruments, W.E. Knowles Middleton and Athelstan F. Spilhaus, Third Edition revised, University of Toronto Press, Toronto, 1953 Invention of the Meteorological Instruments, W. E. Knowles Middleton, The Johns Hopkins Press, Baltimore, 1969 External links Description of the development and the construction of an ultrasonic anemometer Animation Showing Sonic Principle of Operation (Time of Flight Theory) – Gill Instruments Collection of historical anemometer Principle of Operation: Acoustic Resonance measurement – FT Technologies Thermopedia, "Anemometers (laser doppler)" Thermopedia, "Anemometers (pulsed thermal)" Thermopedia, "Anemometers (vane)" The Rotorvane Anemometer. Measuring both wind speed and direction using a tagged three-cup sensor Italian inventions Measuring instruments Meteorological instrumentation and equipment Navigational equipment Wind power 15th-century inventions
Anemometer
[ "Technology", "Engineering" ]
4,043
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
3,022
https://en.wikipedia.org/wiki/Autonomous%20building
An autonomous building is a building designed to be operated independently from infrastructural support services such as the electric power grid, gas grid, municipal water systems, sewage treatment systems, storm drains, communication services, and in some cases, public roads. The literature mostly refers to housing, or the autonomous house. Advocates of autonomous building describe advantages that include reduced environmental impacts, increased security, and lower costs of ownership. Some cited advantages satisfy tenets of green building, not independence per se (see below). Off-grid buildings often rely very little on civil services and are therefore safer and more comfortable during civil disaster or military attacks. For example, off-grid buildings would not lose power or water if public supplies were compromised. History 1970s In the 1970s, groups of activists and engineers were inspired by the warnings of imminent resource depletion and starvation. In the United States, a group calling themselves the New Alchemists were famous for the depth of research effort placed in their projects. Using conventional construction techniques, they designed a series of "bioshelter" projects, the most famous of which was The Ark bioshelter community for Prince Edward Island. They published the plans for all of these, with detailed design calculations and blueprints. The Ark used wind-based water pumping and electricity and was self-contained in food production. It had living quarters for people, fish tanks raising tilapia for protein, a greenhouse watered with fish water, and a closed-loop sewage reclamation system that recycled human waste into sanitized fertilizer for the fish tanks. As of January 2010, the successor organization to the New Alchemists has a web page up as the "New Alchemy Institute". The PEI Ark has been abandoned and partially renovated several times. 1990s The 1990s saw the development of Earthships, similar in intent to the Ark project, but organised as a for-profit venture, with construction details published in a series of three books by American architect Mike Reynolds. The building material is tires filled with earth. This makes a wall that has large amounts of thermal mass (see earth sheltering). Berms are placed on exposed surfaces to further increase the house's temperature stability. The water system starts with rain water, processed for drinking, then washing, then plant watering, then toilet flushing, and finally black water is recycled again for more plant watering. The cisterns are placed and used as thermal masses. Power, including electricity, heat and water heating, is from solar power. Some 1990s architects such as William McDonough and Ken Yeang applied environmentally responsible building design to large commercial buildings, such as office buildings, making them largely self-sufficient in energy production. One major bank building (ING Group's Amsterdam headquarters) in the Netherlands was constructed to be autonomous and artistic as well. 2000s In 2002, British architects Brenda and Robert Vale wrote:It is quite possible in all parts of Australia to construct a 'house with no bills', which would be comfortable without heating and cooling, which would make its own electricity, collect its own water and deal with its own waste...These houses can be built now, using off-the-shelf techniques. It is possible to build a "house with no bills" for the same price as a conventional house, but it would be (25%) smaller. 
Advantages As an architect or engineer becomes more concerned with the disadvantages of transportation networks, and dependence on distant resources, their designs tend to include more autonomous elements. The historic path to autonomy was a concern for secure sources of heat, power, water and food. A nearly parallel path toward autonomy has been to start with a concern for environmental impacts, which cause disadvantages. Autonomous buildings can increase security and reduce environmental impacts by using on-site resources (such as sunlight and rain) that would otherwise be wasted. Autonomy often dramatically reduces the costs and impacts of networks that serve the building, because autonomy short-circuits the multiplying inefficiencies of collecting and transporting resources. Other impacted resources, such as oil reserves and the retention of the local watershed, can often be cheaply conserved by thoughtful designs. Autonomous buildings are usually energy-efficient in operation, and therefore cost-efficient, for the obvious reason that smaller energy needs are easier to satisfy off-grid. But they may substitute energy production or other techniques to avoid diminishing returns in extreme conservation. An autonomous structure is not always environmentally friendly. The goal of independence from support systems is associated with, but not identical to, other goals of environmentally responsible green building. However, autonomous buildings also usually include some degree of sustainability through the use of renewable energy and other renewable resources, producing no more greenhouse gases than they consume, and other measures. Disadvantages First and fundamentally, independence is a matter of degree. For example, eliminating dependence on the electrical grid is relatively easy. In contrast, running an efficient, reliable food source can be a chore. Living within an autonomous shelter may also require sacrifices in lifestyle or social opportunities. Even the most comfortable and technologically advanced autonomous homes could require alterations of residents' behavior. Some may not welcome the extra chores. The Vails described some clients' experiences as inconvenient, irritating, isolating, or even as an unwanted full-time job. A well-designed building can reduce this issue, but usually at the expense of reduced autonomy. An autonomous house must be custom-built (or extensively retrofitted) to suit the climate and location. Passive solar techniques, alternative toilet and sewage systems, thermal massing designs, basement battery systems, efficient windowing, and the array of other design tactics require some degree of non-standard construction, added expense, ongoing experimentation and maintenance, and also have an effect on the psychology of the space. Systems Water There are many methods of collecting and conserving water. Use reduction is cost-effective. Greywater systems reuse drained wash water to flush toilets or to water lawns and gardens. Greywater systems can halve the water use of most residential buildings; however, they require the purchase of a sump, greywater pressurization pump, and secondary plumbing. Some builders are installing waterless urinals and even composting toilets that eliminate water usage in sewage disposal. The classic solution with minimal life-style changes is using a well. Once drilled, a well-foot requires substantial power. However, advanced well-foots can reduce power usage by twofold or more from older models. 
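To put a rough number on the water savings mentioned above, the following Python sketch totals a household's daily draw and offsets the demands that greywater can cover (toilet flushing and irrigation) against the washwater actually produced. The per-use figures are illustrative assumptions, not values from any study, and real households vary widely.

# Rough sizing sketch: how much fresh-water demand a greywater system can offset.
# All per-use figures below are illustrative assumptions.

litres_per_day = {
    "showers_and_baths":    120.0,
    "clothes_washing":       60.0,
    "kitchen_and_drinking":  40.0,
    "toilet_flushing":       90.0,
    "garden_irrigation":     70.0,
}

total = sum(litres_per_day.values())

# Greywater systems reuse drained washwater for toilet flushing and irrigation,
# so those demands stop drawing on the fresh supply, up to the amount of
# washwater actually produced.
washwater = litres_per_day["showers_and_baths"] + litres_per_day["clothes_washing"]
reusable_demand = litres_per_day["toilet_flushing"] + litres_per_day["garden_irrigation"]
offset = min(washwater, reusable_demand)

print(f"Fresh-water demand without reuse: {total:.0f} L/day")
print(f"Fresh-water demand with greywater reuse: {total - offset:.0f} L/day")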
Well water can be contaminated in some areas. The Sono arsenic filter eliminates unhealthy arsenic in well water. However drilling a well is an uncertain activity, with aquifers depleted in some areas. It can also be expensive. In regions with sufficient rainfall, it is often more economical to design a building to use rainwater harvesting, with supplementary water deliveries in a drought. Rain water makes excellent soft washwater, but needs antibacterial treatment. If used for drinking, mineral supplements or mineralization is necessary. Most desert and temperate climates get at least of rain per year. This means that a typical one-story house with a greywater system can supply its year-round water needs from its roof alone. In the driest areas, it might require a cistern of . Many areas average of rain per week, and these can use a cistern as small as . In many areas, it is difficult to keep a roof clean enough for drinking. To reduce dirt and bad tastes, systems use a metal collecting-roof and a "roof cleaner" tank that diverts the first 40 liters. Cistern water is usually chlorinated, though reverse osmosis systems provide even better quality drinking water. In the classic Roman house ("Domus"), household water was provided from a cistern (the "impluvium"), which was a decorative feature of the atrium, the house's main public space. It was fed by downspout tiles from the inward-facing roof-opening (the "compluvium"). Often water lilies were grown in it to purify the water. Wealthy households often supplemented the rain with a small fountain fed from a city's cistern. The impluvium always had an overflow drain so it could not flood the house. Modern cisterns are usually large plastic tanks. Gravity tanks on short towers are reliable, so pump repairs are less urgent. The least expensive bulk cistern is a fenced pond or pool at ground level. Reducing autonomy reduces the size and expense of cisterns. Many autonomous homes can reduce water use below per person per day, so that in a drought a month of water can be delivered inexpensively via truck. Self-delivery is often possible by installing fabric water tanks that fit the bed of a pick-up truck. It can be convenient to use the cistern as a heat sink or trap for a heat pump or air conditioning system; however this can make cold drinking water warm, and in drier years may decrease the efficiency of the HVAC system. Solar stills can efficiently produce drinking water from ditch water or cistern water, especially high-efficiency multiple effect humidification designs, which separate the evaporator(s) and condenser(s). New technologies, like reverse osmosis can create unlimited amounts of pure water from polluted water, ocean water, and even from humid air. Watermakers are available for yachts that convert seawater and electricity into potable water and brine. Atmospheric water generators extract moisture from dry desert air and filter it to pure water. Sewage Resource Composting toilets use bacteria to decompose human feces into useful, odourless, sanitary compost. The process is sanitary because soil bacteria eat the human pathogens as well as most of the mass of the waste. Nevertheless, most health authorities forbid direct use of "humanure" for growing food. The risk is microbial and viral contamination, as well as heavy metal toxicity. In a dry composting toilet, the waste is evaporated or digested to gas (mostly carbon dioxide) and vented, so a toilet produces only a few pounds of compost every six months. 
To control the odor, modern toilets use a small fan to keep the toilet under negative pressure, and exhaust the gasses to a vent pipe. Some home sewage treatment systems use biological treatment, usually beds of plants and aquaria, that absorb nutrients and bacteria and convert greywater and sewage to clear water. This odor- and color-free reclaimed water can be used to flush toilets and water outside plants. When tested, it approaches standards for potable water. In climates that freeze, the plants and aquaria need to be kept in a small greenhouse space. Good systems need about as much care as a large aquarium. Electric incinerating toilets turn excrement into a small amount of ash. They are cool to the touch, have no water and no pipes, and require an air vent in a wall. They are used in remote areas where use of septic tanks is limited, usually to reduce nutrient loads in lakes. NASA's bioreactor is an extremely advanced biological sewage system. It can turn sewage into air and water through microbial action. NASA plans to use it in the crewed Mars mission. Another method is NASA's urine-to-water distillation system. A big disadvantage of complex biological sewage treatment systems is that if the house is empty, the sewage system biota may starve to death. Waste Sewage handling is essential for public health. Many diseases are transmitted by poorly functioning sewage systems. The standard system is a tiled leach field combined with a septic tank. The basic idea is to provide a small system with primary sewage treatment. Sludge settles to the bottom of the septic tank, is partially reduced by anaerobic digestion, and fluid is dispersed in the leach field. The leach field is usually under a yard growing grass. Septic tanks can operate entirely by gravity, and if well managed, are reasonably safe. Septic tanks have to be pumped periodically by a vacuum truck to eliminate non reducing solids. Failure to pump a septic tank can cause overflow that damages the leach field, and contaminates ground water. Septic tanks may also require some lifestyle changes, such as not using garbage disposals, minimizing fluids flushed into the tank, and minimizing non-digestible solids flushed into the tank. For example, septic safe toilet paper is recommended. However, septic tanks remain popular because they permit standard plumbing fixtures, and require few or no lifestyle sacrifices. Composting or packaging toilets make it economical and sanitary to throw away sewage as part of the normal garbage collection service. They also reduce water use by half, and eliminate the difficulty and expense of septic tanks. However, they require the local landfill to use sanitary practices. Incinerator systems are quite practical. The ashes are biologically safe, and less than 1/10 the volume of the original waste, but like all incinerator waste, are usually classified as hazardous waste. Traditional methods of sewage handling include pit toilets, latrines, and outhouses. These can be safe, inexpensive and practical. They are still used in many regions. Storm drains Drainage systems are a crucial compromise between human habitability and a secure, sustainable watershed. Paved areas and lawns or turf do not allow much precipitation to filter through the ground to recharge aquifers. They can cause flooding and damage in neighbourhoods, as the water flows over the surface towards a low point. Typically, elaborate, capital-intensive storm sewer networks are engineered to deal with stormwater. 
In some cities, such as the Victorian era London sewers or much of the old City of Toronto, the storm water system is combined with the sanitary sewer system. In the event of heavy precipitation, the load on the sewage treatment plant at the end of the pipe becomes too great to handle and raw sewage is dumped into holding tanks, and sometimes into surface water. Autonomous buildings can address precipitation in a number of ways. If a water-absorbing swale for each yard is combined with permeable concrete streets, storm drains can be omitted from the neighbourhood. This can save more than $800 per house (1970s) by eliminating storm drains. One way to use the savings is to purchase larger lots, which permits more amenities at the same cost. Permeable concrete is an established product in warm climates, and in development for freezing climates. In freezing climates, the elimination of storm drains can often still pay for enough land to construct swales (shallow water collecting ditches) or water impeding berms instead. This plan provides more land for homeowners and can offer more interesting topography for landscaping. Additionally, a green roof captures precipitation and uses the water to grow plants. It can be built into a new building or used to replace an existing roof. Electricity Since electricity is an expensive utility, the first step towards autonomy is to design a house and lifestyle to reduce demand. LED lights, laptop computers and gas-powered refrigerators save electricity, although gas-powered refrigerators are not very efficient. There are also superefficient electric refrigerators, such as those produced by the Sun Frost company, some of which use only about half as much electricity as a mass-market energy star-rated refrigerator. Using a solar roof, solar cells can provide electric power. Solar roofs can be more cost-effective than retrofitted solar power, because buildings need roofs anyway. Modern solar cells last about 40 years, which makes them a reasonable investment in some areas. At a sufficient angle, solar cells are cleaned by run-off rain water and therefore have almost no life-style impact. Many areas have long winter nights or dark cloudy days. In these climates, a solar installation might not pay for itself or large battery storage systems are necessary to achieve electric self-sufficiency. In stormy or windy climates, wind turbines can replace or significantly supplement solar power. The average autonomous house needs only one small wind turbine, 5 metres or less in diameter. On a 30-metre (100-foot) tower, this turbine can provide enough power to supplement solar power on cloudy days. Commercially available wind turbines use sealed, one-moving-part AC generators and passive, self-feathering blades for years of operation without service. The main advantage of wind power is that larger wind turbines have a lower per-watt cost than solar cells, provided there is wind. Turbine location is critical: just as some locations lack sun for solar cells, many areas lack enough wind to make a turbine pay for itself. In the Great Plains of the United States, a 10-metre (33-foot) turbine can supply enough energy to heat and cool a well-built all-electric house. Economic use in other areas requires research, and possibly a site survey. Some sites have access to a stream with a change in elevation. These sites can use small hydropower systems to generate electricity. 
If the difference in elevation is above 30 metres (100 feet), and the stream runs in all seasons, this can provide continuous power with a small, inexpensive installation. Lower changes of elevation require larger installations or dams, and can be less efficient. Clogging at the turbine intake can be a practical problem. The usual solution is a small pool and waterfall (a penstock) to carry away floating debris. Another solution is to utilize a turbine that resists debris, such as a Gorlov helical turbine or Ossberger turbine. During times of low demand, excess power can be stored in batteries for future use. However, batteries need to be replaced every few years. In many areas, battery expenses can be eliminated by attaching the building to the electric power grid and operating the power system with net metering. Utility permission is required, but such cooperative generation is legally mandated in some areas (for example, California). A grid-based building is less autonomous, but more economical and sustainable with fewer lifestyle sacrifices. In rural areas the grid's cost and impacts can be reduced by using single-wire earth return systems (for example, the MALT-system). In areas that lack access to the grid, battery size can be reduced with a generator to recharge the batteries during energy droughts such as extended fogs. Auxiliary generators are usually run from propane, natural gas, or sometimes diesel. An hour of charging usually provides a day of operation. Modern residential chargers permit the user to set the charging times, so the generator is quiet at night. Some generators automatically test themselves once per week. Recent advances in passively stable magnetic bearings may someday permit inexpensive storage of power in a flywheel in a vacuum. Research groups like Canada's Ballard Power Systems are also working to develop a "regenerative fuel cell", a device that can generate hydrogen and oxygen when power is available, and combine these efficiently when power is needed. Earth batteries tap electric currents in the earth called telluric current. They can be installed anywhere in the ground. They provide only low voltages and current. They were used to power telegraphs in the 19th century. As appliance efficiencies increase, they may become practical. Microbial fuel cells and thermoelectric generators allow electricity to be generated from biomass. The plant can be dried, chopped and converted or burned as a whole, or it can be left alive so that waste saps from the plant can be converted by bacteria. Heating Most autonomous buildings are designed to use insulation, thermal mass and passive solar heating and cooling. Examples of these are trombe walls and other technologies as skylights. Passive solar heating can heat most buildings in even the mild and chilly climates. In colder climates, extra construction costs can be as little as 15% more than new, conventional buildings. In warm climates, those having less than two weeks of frosty nights per year, there is no cost impact. The basic requirement for passive solar heating is that the solar collectors must face the prevailing sunlight (south in the Northern Hemisphere, north in the Southern Hemisphere), and the building must incorporate thermal mass to keep it warm in the night. A recent, somewhat experimental solar heating system "Annualized geo solar heating" is practical even in regions that get little or no sunlight in winter. It uses the ground beneath a building for thermal mass. 
Precipitation can carry away the heat, so the ground is shielded with skirts of plastic insulation. The thermal mass of this system is sufficiently inexpensive and large that it can store enough summer heat to warm a building for the whole winter, and enough winter cold to cool the building in summer. In annualized geo solar systems, the solar collector is often separate from (and hotter or colder than) the living space. The building may actually be constructed from insulation, for example, straw-bale construction. Some buildings have been aerodynamically designed so that convection via ducts and interior spaces eliminates any need for electric fans. A more modest "daily solar" design is practical. For example, for about a 15% premium in building costs, the Passivhaus building codes in Europe use high performance insulating windows, R-30 insulation, HRV ventilation, and a small thermal mass. With modest changes in the building's position, modern krypton- or argon-insulated windows permit normal-looking windows to provide passive solar heat without compromising insulation or structural strength. If a small heater is available for the coldest nights, a slab or basement cistern can inexpensively provide the required thermal mass. Passivhaus building codes, in particular, bring unusually good interior air quality, because the buildings change the air several times per hour, passing it through a heat exchanger to keep heat inside. In all systems, a small supplementary heater increases personal security and reduces lifestyle impacts for a small reduction of autonomy. The two most popular heaters for ultra-high-efficiency houses are a small heat pump, which also provides air conditioning, or a central hydronic (radiator) air heater with water recirculating from the water heater. Passivhaus designs usually integrate the heater with the ventilation system. Earth sheltering and windbreaks can also reduce the absolute amount of heat needed by a building. Several feet below the earth, temperature ranges from in North Dakota to , in Southern Florida. Wind breaks reduce the amount of heat carried away from a building. Rounded, aerodynamic buildings also lose less heat. An increasing number of commercial buildings use a combined cycle with cogeneration to provide heating, often water heating, from the output of a natural gas reciprocating engine, gas turbine or stirling electric generator. Houses designed to cope with interruptions in civil services generally incorporate a wood stove, or heat and power from diesel fuel or bottled gas, regardless of their other heating mechanisms. Electric heaters and electric stoves may provide pollution-free heat (depending on the power source), but use large amounts of electricity. If enough electricity is provided by solar panels, wind turbines, or other means, then electric heaters and stoves become a practical autonomous design. Water heating Hot water heat recycling units recover heat from water drain lines. They increase a building's autonomy by decreasing the heat or fuel used to heat water. They are attractive because they have no lifestyle changes. Current practical, comfortable domestic water-heating systems combine a solar preheating system with a thermostatic gas-powered flow-through heater, so that the temperature of the water is consistent, and the amount is unlimited. This reduces life-style impacts at some cost in autonomy. Solar water heaters can save large amounts of fuel. 
Also, small changes in lifestyle, such as doing laundry, dishes and bathing on sunny days, can greatly increase their efficiency. Pure solar heaters are especially useful for laundries, swimming pools and external baths, because these can be scheduled for use on sunny days. The basic trick in a solar water heating system is to use a well-insulated holding tank. Some systems are vacuum- insulated, acting something like large thermos bottles. The tank is filled with hot water on sunny days, and made available at all times. Unlike a conventional tank water heater, the tank is filled only when there is sunlight. Good storage makes a smaller, higher-technology collector feasible. Such collectors can use relatively exotic technologies, such as vacuum insulation, and reflective concentration of sunlight. Cogeneration systems produce hot water from waste heat. They usually get the heat from the exhaust of a generator or fuel cell. Heat recycling, cogeneration and solar pre-heating can save 50–75% of the gas otherwise used. Also, some combinations provide redundant reliability by having several sources of heat. Some authorities advocate replacing bottled gas or natural gas with biogas. However, this is usually impractical unless live-stock are on-site. The wastes of a single family are usually insufficient to produce enough methane for anything more than small amounts of cooking. Cooling Annualized geo solar buildings often have buried, sloped water-tight skirts of insulation that extend from the foundations, to prevent heat leakage between the earth used as thermal mass, and the surface. Less dramatic improvements are possible. Windows can be shaded in summer. Eaves can be overhung to provide the necessary shade. These also shade the walls of the house, reducing cooling costs. Another trick is to cool the building's thermal mass at night, perhaps with a whole-house fan and then cool the building from the thermal mass during the day. It helps to be able to route cold air from a sky-facing radiator (perhaps an air heating solar collector with an alternate purpose) or evaporative cooler directly through the thermal mass. On clear nights, even in tropical areas, sky-facing radiators can cool below freezing. If a circular building is aerodynamically smooth, and cooler than the ground, it can be passively cooled by the "dome effect." Many installations have reported that a reflective or light-colored dome induces a local vertical heat-driven vortex that sucks cooler overhead air downward into a dome if the dome is vented properly (a single overhead vent, and peripheral vents). Some people have reported a temperature differential as high as () between the inside of the dome and the outside. Buckminster Fuller discovered this effect with a simple house design adapted from a grain silo, and adapted his Dymaxion house and geodesic domes to use it. Refrigerators and air conditioners operating from the waste heat of a diesel engine exhaust, heater flue or solar collector are entering use. These use the same principles as a gas refrigerator. Normally, the heat from a flue powers an "absorptive chiller". The cold water or brine from the chiller is used to cool air or a refrigerated space. Cogeneration is popular in new commercial buildings. In current cogeneration systems small gas turbines or stirling engines powered from natural gas produce electricity and their exhaust drives an absorptive chiller. 
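The overall energy balance of such a cogeneration-plus-absorption-chiller arrangement can be approximated with the Python sketch below. The electrical efficiency, heat-recovery fraction and chiller coefficient of performance are illustrative assumptions, not figures for any particular machine.

# Back-of-envelope energy balance for a small cogeneration unit whose exhaust
# heat drives an absorption chiller. All efficiency figures are assumed.

fuel_input_kw      = 10.0   # natural-gas input power (assumed)
electrical_eff     = 0.25   # fraction of fuel energy converted to electricity (assumed)
heat_recovery_frac = 0.55   # fraction of fuel energy recovered as usable exhaust heat (assumed)
absorption_cop     = 0.7    # cooling delivered per unit of driving heat (assumed)

electricity_kw    = fuel_input_kw * electrical_eff
recovered_heat_kw = fuel_input_kw * heat_recovery_frac
cooling_kw        = recovered_heat_kw * absorption_cop

print(f"Electricity produced: {electricity_kw:.1f} kW")
print(f"Exhaust heat recovered: {recovered_heat_kw:.1f} kW")
print(f"Cooling from absorption chiller: {cooling_kw:.1f} kW")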
A truck trailer refrigerator operating from the waste heat of a tractor's diesel exhaust was demonstrated by NRG Solutions, Inc. NRG developed a hydronic ammonia gas heat exchanger and vaporizer, the two essential new, not commercially available components of a waste heat driven refrigerator. A similar scheme (multiphase cooling) can be by a multistage evaporative cooler. The air is passed through a spray of salt solution to dehumidify it, then through a spray of water solution to cool it, then another salt solution to dehumidify it again. The brine has to be regenerated, and that can be done economically with a low-temperature solar still. Multiphase evaporative coolers can lower the air's temperature by 50 °F (28 °C), and still control humidity. If the brine regenerator uses high heat, it also partially sterilises to the air. If enough electric power is available, cooling can be provided by conventional air conditioning using a heat pump. Food production Food production has often been included in historic autonomous projects to provide security. Skilled, intensive gardening can support an adult from as little as 100 square meters of land per person, possibly requiring the use of organic farming and aeroponics. Some proven intensive, low-effort food-production systems include urban gardening (indoors and outdoors). Indoor cultivation may be set up using hydroponics, while outdoor cultivation may be done using permaculture, forest gardening, no-till farming, and do nothing farming. Greenhouses are also sometimes included. Sometimes they are also outfitted with irrigation systems or heat sink systems which can respectively irrigate the plants or help to store energy from the sun and redistribute it at night (when the greenhouses starts to cool down). See also Notes External links Buckminster Fuller Institute GreenSpec, a UK resources site which endorses green building products, systems, and services Brenda and Robert Vale. Sustainable development begins at home. March 15, 2002. The Cropthorne House (December 28, 2009) Building engineering Human habitats Low-energy building Sustainable architecture Sustainable building Self-sustainability Building Buildings and structures
Autonomous building
[ "Engineering", "Environmental_science" ]
5,982
[ "Sustainable building", "Sustainable architecture", "Building", "Building engineering", "Construction", "Civil engineering", "Buildings and structures", "Environmental social science", "Architecture" ]
3,038
https://en.wikipedia.org/wiki/Acid%E2%80%93base%20reaction
In chemistry, an acid–base reaction is a chemical reaction that occurs between an acid and a base. It can be used to determine pH via titration. Several theoretical frameworks provide alternative conceptions of the reaction mechanisms and their application in solving related problems; these are called the acid–base theories, for example, Brønsted–Lowry acid–base theory. Their importance becomes apparent in analyzing acid–base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent. The first of these concepts was provided by the French chemist Antoine Lavoisier, around 1776. It is important to think of the acid–base reaction models as theories that complement each other. For example, the current Lewis model has the broadest definition of what an acid and base are, with the Brønsted–Lowry theory being a subset of what acids and bases are, and the Arrhenius theory being the most restrictive. Acid–base definitions Historic development The concept of an acid–base reaction was first proposed in 1754 by Guillaume-François Rouelle, who introduced the word "base" into chemistry to mean a substance which reacts with an acid to give it solid form (as a salt). Bases are mostly bitter in nature. Lavoisier's oxygen theory of acids The first scientific concept of acids and bases was provided by Lavoisier around 1776. Since Lavoisier's knowledge of strong acids was mainly restricted to oxoacids, such as HNO3 (nitric acid) and H2SO4 (sulfuric acid), which tend to contain central atoms in high oxidation states surrounded by oxygen, and since he was not aware of the true composition of the hydrohalic acids (HF, HCl, HBr, and HI), he defined acids in terms of their containing oxygen, which in fact he named from Greek words meaning "acid-former". The Lavoisier definition held for over 30 years, until the 1810 article and subsequent lectures by Sir Humphry Davy in which he proved the lack of oxygen in hydrogen sulfide (H2S), hydrogen telluride (H2Te), and the hydrohalic acids. However, Davy failed to develop a new theory, concluding that "acidity does not depend upon any particular elementary substance, but upon peculiar arrangement of various substances". One notable modification of oxygen theory was provided by Jöns Jacob Berzelius, who stated that acids are oxides of nonmetals while bases are oxides of metals. Liebig's hydrogen theory of acids In 1838, Justus von Liebig proposed that an acid is a hydrogen-containing compound whose hydrogen can be replaced by a metal. This redefinition was based on his extensive work on the chemical composition of organic acids, finishing the doctrinal shift from oxygen-based acids to hydrogen-based acids started by Davy. Liebig's definition, while completely empirical, remained in use for almost 50 years until the adoption of the Arrhenius definition. Arrhenius definition The first modern definition of acids and bases in molecular terms was devised by Svante Arrhenius. A hydrogen theory of acids, it followed from his 1884 work with Friedrich Wilhelm Ostwald in establishing the presence of ions in aqueous solution and led to Arrhenius receiving the Nobel Prize in Chemistry in 1903. As defined by Arrhenius: An Arrhenius acid is a substance that ionises in water to form hydrogen ions (H+); that is, an acid increases the concentration of H+ ions in an aqueous solution. This causes the protonation of water, or the creation of the hydronium (H3O+) ion. 
Thus, in modern times, the symbol H+ is interpreted as a shorthand for H3O+, because it is now known that a bare proton does not exist as a free species in aqueous solution. This is the species which is measured by pH indicators to measure the acidity or basicity of a solution. An Arrhenius base is a substance that dissociates in water to form hydroxide (OH−) ions; that is, a base increases the concentration of OH− ions in an aqueous solution. The Arrhenius definitions of acidity and alkalinity are restricted to aqueous solutions and are not valid for most non-aqueous solutions, and refer to the concentration of the solvent ions. Under this definition, pure H2SO4 and HCl dissolved in toluene are not acidic, and molten NaOH and solutions of calcium amide in liquid ammonia are not alkaline. This led to the development of the Brønsted–Lowry theory and subsequent Lewis theory to account for these non-aqueous exceptions. The reaction of an acid with a base is called a neutralization reaction. The products of this reaction are a salt and water. In this traditional representation an acid–base neutralization reaction is formulated as a double-replacement reaction. For example, the reaction of hydrochloric acid (HCl) with sodium hydroxide (NaOH) solutions produces a solution of sodium chloride (NaCl) and some additional water molecules: HCl (aq) + NaOH (aq) → NaCl (aq) + H2O (l). The modifier (aq) in this equation was implied by Arrhenius, rather than included explicitly. It indicates that the substances are dissolved in water. Though all three substances, HCl, NaOH and NaCl are capable of existing as pure compounds, in aqueous solutions they are fully dissociated into their aquated ions. Example: Baking powder Baking powder is used to cause the dough for breads and cakes to "rise" by creating millions of tiny carbon dioxide bubbles. Baking powder is not to be confused with baking soda, which is sodium bicarbonate (NaHCO3). Baking powder is a mixture of baking soda (sodium bicarbonate) and acidic salts. The bubbles are created because, when the baking powder is combined with water, the sodium bicarbonate and acid salts react to produce gaseous carbon dioxide. Whether commercially or domestically prepared, the principles behind baking powder formulations remain the same. The acid–base reaction can be generically represented as shown: NaHCO3 + H+ → Na+ + CO2 + H2O. The real reactions are more complicated because the acids are complicated. For example, starting with sodium bicarbonate and monocalcium phosphate (Ca(H2PO4)2), the reaction produces carbon dioxide together with sodium and calcium phosphate salts and water. A typical formulation (by weight) could call for 30% sodium bicarbonate, 5–12% monocalcium phosphate, and 21–26% sodium aluminium sulfate. Alternatively, a commercial baking powder might use sodium acid pyrophosphate as one of the two acidic components instead of sodium aluminium sulfate. Another typical acid in such formulations is cream of tartar (KC4H5O6), a derivative of tartaric acid. Brønsted–Lowry definition The Brønsted–Lowry definition, formulated in 1923, independently by Johannes Nicolaus Brønsted in Denmark and Martin Lowry in England, is based upon the idea of protonation of bases through the deprotonation of acids – that is, the ability of acids to "donate" hydrogen ions (H+), otherwise known as protons, to bases, which "accept" them. An acid–base reaction is, thus, the removal of a hydrogen ion from the acid and its addition to the base. The removal of a hydrogen ion from an acid produces its conjugate base, which is the acid with a hydrogen ion removed. 
The reception of a proton by a base produces its conjugate acid, which is the base with a hydrogen ion added. Unlike the previous definitions, the Brønsted–Lowry definition does not refer to the formation of salt and solvent, but instead to the formation of conjugate acids and conjugate bases, produced by the transfer of a proton from the acid to the base. In this approach, acids and bases are fundamentally different in behavior from salts, which are seen as electrolytes, subject to the theories of Debye, Onsager, and others. An acid and a base react not to produce a salt and a solvent, but to form a new acid and a new base. The concept of neutralization is thus absent. Brønsted–Lowry acid–base behavior is formally independent of any solvent, making it more all-encompassing than the Arrhenius model. The calculation of pH under the Arrhenius model depended on alkalis (bases) dissolving in water (aqueous solution). The Brønsted–Lowry model expanded what could be pH tested using insoluble and soluble substances (gas, liquid, or solid). The general formula for acid–base reactions according to the Brønsted–Lowry definition is: HA + B ⇌ BH+ + A−, where HA represents the acid, B represents the base, BH+ represents the conjugate acid of B, and A− represents the conjugate base of HA. For example, a Brønsted–Lowry model for the dissociation of hydrochloric acid (HCl) in aqueous solution would be the following: HCl + H2O → H3O+ + Cl−. The removal of H+ from the HCl produces the chloride ion, Cl−, the conjugate base of the acid. The addition of H+ to the H2O (acting as a base) forms the hydronium ion, H3O+, the conjugate acid of the base. Water is amphoteric; that is, it can act as both an acid and a base. The Brønsted–Lowry model explains this, showing the dissociation of water into low concentrations of hydronium and hydroxide ions: H2O + H2O ⇌ H3O+ + OH−. In this reaction, one molecule of water acts as an acid, donating an H+ and forming the conjugate base, OH−, and a second molecule of water acts as a base, accepting the H+ ion and forming the conjugate acid, H3O+. As an example of water acting as an acid, consider an aqueous solution of pyridine, C5H5N: C5H5N + H2O ⇌ [C5H5NH]+ + OH−. In this example, a water molecule is split into a hydrogen ion, which is donated to a pyridine molecule, and a hydroxide ion. In the Brønsted–Lowry model, the solvent does not necessarily have to be water, as is required by the Arrhenius Acid–Base model. For example, consider what happens when acetic acid, CH3COOH, dissolves in liquid ammonia: CH3COOH + NH3 ⇌ NH4+ + CH3COO−. An H+ ion is removed from acetic acid, forming its conjugate base, the acetate ion, CH3COO−. The addition of an H+ ion to an ammonia molecule of the solvent creates its conjugate acid, the ammonium ion, NH4+. The Brønsted–Lowry model calls hydrogen-containing substances (like HCl) acids. Thus, some substances, which many chemists considered to be acids, such as SO3 or BCl3, are excluded from this classification due to lack of hydrogen. Gilbert N. Lewis wrote in 1938, "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." Furthermore, KOH and KNH2 are not considered Brønsted bases, but rather salts containing the bases OH− and NH2−. Lewis definition The hydrogen requirement of Arrhenius and Brønsted–Lowry was removed by the Lewis definition of acid–base reactions, devised by Gilbert N. Lewis in 1923, in the same year as Brønsted–Lowry, but it was not elaborated by him until 1938. 
Instead of defining acid–base reactions in terms of protons or other bonded substances, the Lewis definition defines a base (referred to as a Lewis base) to be a compound that can donate an electron pair, and an acid (a Lewis acid) to be a compound that can receive this electron pair. For example, boron trifluoride is a typical Lewis acid. It can accept a pair of electrons as it has a vacancy in its octet. The fluoride ion has a full octet and can donate a pair of electrons. The combination of the two is thus a typical Lewis acid–Lewis base reaction. All compounds of group 13 elements with a formula can behave as Lewis acids. Similarly, compounds of group 15 elements with a formula , such as amines, , and phosphines, , can behave as Lewis bases. Adducts between them have the formula with a dative covalent bond, shown symbolically as ←, between the atoms A (acceptor) and D (donor). Compounds of group 16 with a formula may also act as Lewis bases; in this way, a compound like an ether, , or a thioether, , can act as a Lewis base. The Lewis definition is not limited to these examples. For instance, carbon monoxide acts as a Lewis base when it forms an adduct with boron trifluoride, of formula . Adducts involving metal ions are referred to as co-ordination compounds; each ligand donates a pair of electrons to the metal ion. The reaction can be seen as an acid–base reaction in which a stronger base (ammonia) replaces a weaker one (water). The Lewis and Brønsted–Lowry definitions are consistent with each other since the reaction is an acid–base reaction in both theories. Solvent system definition One of the limitations of the Arrhenius definition is its reliance on water solutions. Edward Curtis Franklin studied the acid–base reactions in liquid ammonia in 1905 and pointed out the similarities to the water-based Arrhenius theory. Albert F.O. Germann, working with liquid phosgene, , formulated the solvent-based theory in 1925, thereby generalizing the Arrhenius definition to cover aprotic solvents. Germann pointed out that in many solutions, there are ions in equilibrium with the neutral solvent molecules: solvonium ions: a generic name for positive ions. These are also sometimes called solvo-acids; when formed by protonation of the solvent, they are lyonium ions. solvate ions: a generic name for negative ions. These are also sometimes called solvo-bases; when formed by deprotonation of the solvent, they are lyate ions. For example, water and ammonia undergo such dissociation into hydronium and hydroxide, and ammonium and amide, respectively: Some aprotic systems also undergo such dissociation, such as dinitrogen tetroxide into nitrosonium and nitrate, antimony trichloride into dichloroantimonium and tetrachloroantimonate, and phosgene into chlorocarboxonium and chloride: A solute that causes an increase in the concentration of the solvonium ions and a decrease in the concentration of solvate ions is defined as an acid. A solute that causes an increase in the concentration of the solvate ions and a decrease in the concentration of the solvonium ions is defined as a base. Thus, in liquid ammonia, (supplying ) is a strong base, and (supplying ) is a strong acid. In liquid sulfur dioxide (), thionyl compounds (supplying ) behave as acids, and sulfites (supplying ) behave as bases. 
The non-aqueous acid–base reactions in liquid ammonia are similar to the reactions in water: Nitric acid can be a base in liquid sulfuric acid: The unique strength of this definition shows in describing the reactions in aprotic solvents; for example, in liquid : Because the solvent system definition depends on the solute as well as on the solvent itself, a particular solute can be either an acid or a base depending on the choice of the solvent: is a strong acid in water, a weak acid in acetic acid, and a weak base in fluorosulfonic acid; this characteristic of the theory has been seen as both a strength and a weakness, because some substances (such as and ) have been seen to be acidic or basic in their own right. On the other hand, solvent system theory has been criticized as being too general to be useful. Also, it has been thought that there is something intrinsically acidic about hydrogen compounds, a property not shared by non-hydrogenic solvonium salts. Lux–Flood definition This acid–base theory was a revival of the oxygen theory of acids and bases proposed by German chemist Hermann Lux in 1939, further improved by Håkon Flood, and is still used in modern geochemistry and electrochemistry of molten salts. This definition describes an acid as an oxide ion () acceptor and a base as an oxide ion donor. For example: This theory is also useful in the systematisation of the reactions of noble gas compounds, especially the xenon oxides, fluorides, and oxofluorides. Usanovich definition Mikhail Usanovich developed a general theory that does not restrict acidity to hydrogen-containing compounds, but his approach, published in 1938, was even more general than Lewis theory. Usanovich's theory can be summarized as defining an acid as anything that accepts negative species or donates positive ones, and a base as the reverse. This defined the concept of redox (oxidation-reduction) as a special case of acid–base reactions. Some examples of Usanovich acid–base reactions include: Rationalizing the strength of Lewis acid–base interactions HSAB theory In 1963, Ralph Pearson proposed a qualitative concept known as the Hard and Soft Acids and Bases principle, which was later made quantitative with the help of Robert Parr in 1984. 'Hard' applies to species that are small, have high charge states, and are weakly polarizable. 'Soft' applies to species that are large, have low charge states, and are strongly polarizable. Acids and bases interact, and the most stable interactions are hard–hard and soft–soft. This theory has found use in organic and inorganic chemistry. ECW model The ECW model created by Russell S. Drago is a quantitative model that describes and predicts the strength of Lewis acid–base interactions in terms of the enthalpy of adduct formation, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by its own EA and CA, and each base by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is −ΔH = EAEB + CACB + W. The W term represents a constant energy contribution for an acid–base reaction such as the cleavage of a dimeric acid or base. The equation predicts reversal of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. Acid–base equilibrium The reaction of a strong acid with a strong base is essentially a quantitative reaction. 
For example, in the neutralization of hydrochloric acid by sodium hydroxide, both the sodium and chloride ions are spectators, as the neutralization reaction itself does not involve them. With weak bases, addition of acid is not quantitative because a solution of a weak base is a buffer solution. A solution of a weak acid is also a buffer solution. When a weak acid reacts with a weak base, an equilibrium mixture is produced. For example, adenine, written as AH, can react with a hydrogen phosphate ion, HPO4^2−, to form its conjugate base A− and the dihydrogen phosphate ion H2PO4−. The equilibrium constant for this reaction can be derived from the acid dissociation constants of adenine and of the dihydrogen phosphate ion. The notation [X] signifies "concentration of X". The two dissociation equilibria are AH ⇌ A− + H+, with Ka(AH) = [A−][H+]/[AH], and H2PO4− ⇌ HPO4^2− + H+, with Ka(H2PO4−) = [HPO4^2−][H+]/[H2PO4−]. When these two equations are combined by eliminating the hydrogen ion concentration, an expression for the equilibrium constant, K = [A−][H2PO4−]/([AH][HPO4^2−]) = Ka(AH)/Ka(H2PO4−), is obtained. Acid–alkali reaction An acid–alkali reaction is a special case of an acid–base reaction, where the base used is also an alkali. When an acid reacts with an alkali salt (a metal hydroxide), the product is a metal salt and water. Acid–alkali reactions are also neutralization reactions. In general, acid–alkali reactions can be simplified to the net ionic reaction between hydrogen and hydroxide ions by omitting spectator ions. Acids are in general pure substances that contain hydrogen cations () or cause them to be produced in solutions. Hydrochloric acid () and sulfuric acid () are common examples. In water, these break apart into ions: The alkali breaks apart in water, yielding dissolved hydroxide ions: . See also Acid–base titration Deprotonation Donor number Electron configuration Gutmann–Beckett method Lewis structure Nucleophilic substitution Neutralization (chemistry) Protonation Redox reactions Resonance (chemistry) Notes References Sources External links Acid–base Physiology – an on-line text John W. Kimball's online biology book section on acids and bases. Acids Bases (chemistry) Chemical reactions Equilibrium chemistry Inorganic reactions
Acid–base reaction
[ "Chemistry" ]
4,201
[ "Acid–base chemistry", "Acids", "Inorganic reactions", "Equilibrium chemistry", "nan", "Bases (chemistry)" ]
3,054
https://en.wikipedia.org/wiki/Alpha%20helix
An alpha helix (or α-helix) is a sequence of amino acids in a protein that are twisted into a coil (a helix). The alpha helix is the most common structural arrangement in the secondary structure of proteins. It is also the most extreme type of local structure, and it is the local structure that is most easily predicted from a sequence of amino acids. The alpha helix has a right-handed helix conformation in which every backbone N−H group hydrogen bonds to the backbone C=O group of the amino acid that is four residues earlier in the protein sequence. Other names The alpha helix is also commonly called a: Pauling–Corey–Branson α-helix (from the names of three scientists who described its structure) 3.613-helix because there are 3.6 amino acids in one ring, with 13 atoms being involved in the ring formed by the hydrogen bond (starting with amidic hydrogen and ending with carbonyl oxygen) Discovery In the early 1930s, William Astbury showed that there were drastic changes in the X-ray fiber diffraction of moist wool or hair fibers upon significant stretching. The data suggested that the unstretched fibers had a coiled molecular structure with a characteristic repeat of ≈. Astbury initially proposed a linked-chain structure for the fibers. He later joined other researchers (notably the American chemist Maurice Huggins) in proposing that: the unstretched protein molecules formed a helix (which he called the α-form) the stretching caused the helix to uncoil, forming an extended state (which he called the β-form). Although incorrect in their details, Astbury's models of these forms were correct in essence and correspond to modern elements of secondary structure, the α-helix and the β-strand (Astbury's nomenclature was kept), which were developed by Linus Pauling, Robert Corey and Herman Branson in 1951 (see below); that paper showed both right- and left-handed helices, although in 1960 the crystal structure of myoglobin showed that the right-handed form is the common one. Hans Neurath was the first to show that Astbury's models could not be correct in detail, because they involved clashes of atoms. Neurath's paper and Astbury's data inspired H. S. Taylor, Maurice Huggins and Bragg and collaborators to propose models of keratin that somewhat resemble the modern α-helix. Two key developments in the modeling of the modern α-helix were: the correct bond geometry, thanks to the crystal structure determinations of amino acids and peptides and Pauling's prediction of planar peptide bonds; and his relinquishing of the assumption of an integral number of residues per turn of the helix. The pivotal moment came in the early spring of 1948, when Pauling caught a cold and went to bed. Being bored, he drew a polypeptide chain of roughly correct dimensions on a strip of paper and folded it into a helix, being careful to maintain the planar peptide bonds. After a few attempts, he produced a model with physically plausible hydrogen bonds. Pauling then worked with Corey and Branson to confirm his model before publication. In 1954, Pauling was awarded his first Nobel Prize "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances" (such as proteins), prominently including the structure of the α-helix. 
Structure Geometry and hydrogen bonding The amino acids in an α-helix are arranged in a right-handed helical structure where each amino acid residue corresponds to a 100° turn in the helix (i.e., the helix has 3.6 residues per turn), and a translation of along the helical axis. Dunitz describes how Pauling's first article on the theme in fact shows a left-handed helix, the enantiomer of the true structure. Short pieces of left-handed helix sometimes occur with a large content of achiral glycine amino acids, but are unfavorable for the other normal, biological -amino acids. The pitch of the alpha-helix (the vertical distance between consecutive turns of the helix) is , which is the product of 1.5 and 3.6. Most importantly, the N-H group of one amino acid forms a hydrogen bond with the C=O group of the amino acid four residues earlier; this repeated i + 4 → i hydrogen bonding is the most prominent characteristic of an α-helix. Official international nomenclature specifies two ways of defining α-helices, rule 6.2 in terms of repeating φ, ψ torsion angles (see below) and rule 6.3 in terms of the combined pattern of pitch and hydrogen bonding. The α-helices can be identified in protein structure using several computational methods, such as DSSP (Define Secondary Structure of Protein). Similar structures include the 310 helix (i + 3 → i hydrogen bonding) and the π-helix (i + 5 → i hydrogen bonding). The α-helix can be described as a 3.613 helix, since the i + 4 spacing adds three more atoms to the H-bonded loop compared to the tighter 310 helix, and on average, 3.6 amino acids are involved in one ring of α-helix. The subscripts refer to the number of atoms (including the hydrogen) in the closed loop formed by the hydrogen bond. Residues in α-helices typically adopt backbone (φ, ψ) dihedral angles around (−60°, −45°), as shown in the image at right. In more general terms, they adopt dihedral angles such that the ψ dihedral angle of one residue and the φ dihedral angle of the next residue sum to roughly −105°. As a consequence, α-helical dihedral angles, in general, fall on a diagonal stripe on the Ramachandran diagram (of slope −1), ranging from (−90°, −15°) to (−70°, −35°). For comparison, the sum of the dihedral angles for a 310 helix is roughly −75°, whereas that for the π-helix is roughly −130°. The general formula for the rotation angle Ω per residue of any polypeptide helix with trans isomers is given by the equation 3 cos Ω = 1 − 4 cos²((φ + ψ)/2), which for the α-helical sum φ + ψ ≈ −105° gives Ω ≈ 100° per residue, consistent with the 3.6 residues per turn quoted above. The α-helix is tightly packed; there is almost no free space within the helix. The amino-acid side-chains are on the outside of the helix, and point roughly "downward" (i.e., toward the N-terminus), like the branches of an evergreen tree (Christmas tree effect). This directionality is sometimes used in preliminary, low-resolution electron-density maps to determine the direction of the protein backbone. Stability Helices observed in proteins can range from four to over forty residues long, but a typical helix contains about ten amino acids (about three turns). In general, short polypeptides do not exhibit much α-helical structure in solution, since the entropic cost associated with the folding of the polypeptide chain is not compensated for by a sufficient amount of stabilizing interactions. In general, the backbone hydrogen bonds of α-helices are considered slightly weaker than those found in β-sheets, and are readily attacked by the ambient water molecules. 
However, in more hydrophobic environments such as the plasma membrane, or in the presence of co-solvents such as trifluoroethanol (TFE), or isolated from solvent in the gas phase, oligopeptides readily adopt stable α-helical structure. Furthermore, crosslinks can be incorporated into peptides to conformationally stabilize helical folds. Crosslinks stabilize the helical state by entropically destabilizing the unfolded state and by removing enthalpically stabilized "decoy" folds that compete with the fully helical state. It has been shown that α-helices are more stable, robust to mutations and designable than β-strands in natural proteins, and also in artificially designed proteins. Visualization The 3 most popular ways of visualizing the alpha-helical secondary structure of oligopeptide sequences are (1) a helical wheel, (2) a wenxiang diagram, and (3) a helical net. Each of these can be visualized with various software packages and web servers. To generate a small number of diagrams, Heliquest can be used for helical wheels, and NetWheels can be used for helical wheels and helical nets. To programmatically generate a large number of diagrams, helixvis can be used to draw helical wheels and wenxiang diagrams in the R and Python programming languages. Experimental determination Since the α-helix is defined by its hydrogen bonds and backbone conformation, the most detailed experimental evidence for α-helical structure comes from atomic-resolution X-ray crystallography such as the example shown at right. It is clear that all the backbone carbonyl oxygens point downward (toward the C-terminus) but splay out slightly, and the H-bonds are approximately parallel to the helix axis. Protein structures from NMR spectroscopy also show helices well, with characteristic observations of nuclear Overhauser effect (NOE) couplings between atoms on adjacent helical turns. In some cases, the individual hydrogen bonds can be observed directly as a small scalar coupling in NMR. There are several lower-resolution methods for assigning general helical structure. The NMR chemical shifts (in particular of the Cα, Cβ and C′) and residual dipolar couplings are often characteristic of helices. The far-UV (170–250 nm) circular dichroism spectrum of helices is also idiosyncratic, exhibiting a pronounced double minimum at around 208 and 222 nm. Infrared spectroscopy is rarely used, since the α-helical spectrum resembles that of a random coil (although these might be discerned by, e.g., hydrogen-deuterium exchange). Finally, cryo electron microscopy is now capable of discerning individual α-helices within a protein, although their assignment to residues is still an active area of research. Long homopolymers of amino acids often form helices if soluble. Such long, isolated helices can also be detected by other methods, such as dielectric relaxation, flow birefringence, and measurements of the diffusion constant. In stricter terms, these methods detect only the characteristic prolate (long cigar-like) hydrodynamic shape of a helix, or its large dipole moment. Amino-acid propensities Different amino-acid sequences have different propensities for forming α-helical structure. Methionine, alanine, leucine, uncharged glutamate, lysine, and arginine ("MALEK-R" in the amino-acid 1-letter codes) all have especially high helix-forming propensities, whereas proline and glycine have poor helix-forming propensities. 
Proline either breaks or kinks a helix, both because it cannot donate an amide hydrogen bond (because it has none) and because its sidechain interferes sterically with the backbone of the preceding turn inside a helix, which forces a bend of about 30° in the helix's axis. However, proline is often the first residue of a helix, presumably due to its structural rigidity. At the other extreme, glycine also tends to disrupt helices because its high conformational flexibility makes it entropically expensive to adopt the relatively constrained α-helical structure. Table of standard amino acid alpha-helical propensities Estimated differences in free energy change, Δ(ΔG), estimated in kcal/mol per residue in an α-helical configuration, relative to alanine arbitrarily set as zero. Higher numbers (more positive free energy changes) are less favoured. Significant deviations from these average numbers are possible, depending on the identities of the neighbouring residues. {| class="wikitable sortable" |+Differences in free energy change per residue !rowspan=2| Amino acid !rowspan=2 class="unsortable"| 3-letter !rowspan=2 class="unsortable"| 1-letter !colspan=2| Helical penalty |- !kcal/mol !kJ/mol |- | Alanine | Ala | A | |- | Arginine | Arg | R | |- | Asparagine | Asn | N | |- | Aspartic acid | Asp | D | |- | Cysteine | Cys | C | |- | Glutamic acid | Glu | E | |- | Glutamine | Gln | Q | |- | Glycine | Gly | G | |- | Histidine | His | H | |- | Isoleucine | Ile | I | |- | Leucine | Leu | L | |- | Lysine | Lys | K | |- | Methionine | Met | M | |- | Phenylalanine | Phe | F | |- | Proline | Pro | P | |- | Serine | Ser | S | |- | Threonine | Thr | T | |- | Tryptophan | Trp | W | |- | Tyrosine | Tyr | Y | |- | Valine | Val | V | |} Dipole moment A helix has an overall dipole moment due to the aggregate effect of the individual microdipoles from the carbonyl groups of the peptide bond pointing along the helix axis. The effects of this macrodipole are a matter of some controversy. α-helices often occur with the N-terminal end bound by a negatively charged group, sometimes an amino acid side chain such as glutamate or aspartate, or sometimes a phosphate ion. Some regard the helix macrodipole as interacting electrostatically with such groups. Others feel that this is misleading and it is more realistic to say that the hydrogen bond potential of the free NH groups at the N-terminus of an α-helix can be satisfied by hydrogen bonding; this can also be regarded as set of interactions between local microdipoles such as . Coiled coils Coiled-coil α helices are highly stable forms in which two or more helices wrap around each other in a "supercoil" structure. Coiled coils contain a highly characteristic sequence motif known as a heptad repeat, in which the motif repeats itself every seven residues along the sequence (amino acid residues, not DNA base-pairs). The first and especially the fourth residues (known as the a and d positions) are almost always hydrophobic; the fourth residue is typically leucine this gives rise to the name of the structural motif called a leucine zipper, which is a type of coiled-coil. These hydrophobic residues pack together in the interior of the helix bundle. In general, the fifth and seventh residues (the e and g positions) have opposing charges and form a salt bridge stabilized by electrostatic interactions. Fibrous proteins such as keratin or the "stalks" of myosin or kinesin often adopt coiled-coil structures, as do several dimerizing proteins. 
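The heptad pattern described above can be checked directly from a sequence. The short Python sketch below is a minimal illustration of the idea rather than a reproduction of any published tool: it assigns heptad positions a–g, records the approximate helical-wheel angle implied by the 100° rotation per residue quoted earlier, and reports how often the a and d positions are hydrophobic. The example sequence, the choice of register (which residue is position a), and the crude hydrophobic set are assumptions made only for demonstration.

```python
# Minimal sketch: assign heptad positions (a-g) along a putative coiled-coil
# helix and check whether the 'a' and 'd' positions are mostly hydrophobic.
# The sequence, register offset, and hydrophobic set below are illustrative assumptions.

HYDROPHOBIC = set("AILMFVW")           # crude hydrophobic set (assumption)
HEPTAD = "abcdefg"                     # positions repeat every 7 residues

def heptad_report(seq, register_offset=0):
    """Return (position, wheel_angle, residue, hydrophobic?) for each residue."""
    rows = []
    for i, aa in enumerate(seq):
        pos = HEPTAD[(i + register_offset) % 7]
        wheel_angle = (i * 100) % 360  # about 100 degrees per residue in an alpha helix
        rows.append((pos, wheel_angle, aa, aa in HYDROPHOBIC))
    return rows

def fraction_hydrophobic(rows, positions=("a", "d")):
    """Fraction of residues at the given heptad positions that are hydrophobic."""
    picked = [hydro for pos, _, _, hydro in rows if pos in positions]
    return sum(picked) / len(picked) if picked else 0.0

if __name__ == "__main__":
    # Hypothetical leucine-zipper-like fragment, used only as an example.
    seq = "LKQLEDKVEELLSKNYHLENEVARLKKLV"
    rows = heptad_report(seq)
    for pos, angle, aa, hydro in rows:
        print(f"{aa}  position {pos}  wheel angle {angle:3d} deg  hydrophobic={hydro}")
    print("fraction hydrophobic at a/d:", round(fraction_hydrophobic(rows), 2))
```

For a genuine coiled-coil sequence the fraction reported for the a and d positions should be noticeably higher than for the other positions; a fuller treatment would also compare all seven possible register offsets rather than assuming one, as this sketch does.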
A pair of coiled-coils – a four-helix bundle – is a very common structural motif in proteins. For example, it occurs in human growth hormone and several varieties of cytochrome. The Rop protein, which promotes plasmid replication in bacteria, is an interesting case in which a single polypeptide forms a coiled-coil and two monomers assemble to form a four-helix bundle. Facial arrangements The amino acids that make up a particular helix can be plotted on a helical wheel, a representation that illustrates the orientations of the constituent amino acids (see the article for leucine zipper for such a diagram). Often in globular proteins, as well as in specialized structures such as coiled-coils and leucine zippers, an α-helix will exhibit two "faces": one containing predominantly hydrophobic amino acids oriented toward the interior of the protein, in the hydrophobic core, and one containing predominantly polar amino acids oriented toward the solvent-exposed surface of the protein. Changes in binding orientation also occur for facially-organized oligopeptides. This pattern is especially common in antimicrobial peptides, and many models have been devised to describe how this relates to their function. Common to many of them is that the hydrophobic face of the antimicrobial peptide forms pores in the plasma membrane after associating with the fatty chains at the membrane core. Larger-scale assemblies Myoglobin and hemoglobin, the first two proteins whose structures were solved by X-ray crystallography, have very similar folds made up of about 70% α-helix, with the rest being non-repetitive regions, or "loops" that connect the helices. In classifying proteins by their dominant fold, the Structural Classification of Proteins database maintains a large category specifically for all-α proteins. Hemoglobin then has an even larger-scale quaternary structure, in which the functional oxygen-binding molecule is made up of four subunits. Functional roles DNA binding α-Helices have particular significance in DNA binding motifs, including helix-turn-helix motifs, leucine zipper motifs and zinc finger motifs. This is because of the convenient structural fact that the diameter of an α-helix is about including an average set of sidechains, about the same as the width of the major groove in B-form DNA, and also because coiled-coil (or leucine zipper) dimers of helices can readily position a pair of interaction surfaces to contact the sort of symmetrical repeat common in double-helical DNA. An example of both aspects is the transcription factor Max (see image at left), which uses a helical coiled coil to dimerize, positioning another pair of helices for interaction in two successive turns of the DNA major groove. Membrane spanning α-Helices are also the most common protein structure element that crosses biological membranes (transmembrane protein), presumably because the helical structure can satisfy all backbone hydrogen-bonds internally, leaving no polar groups exposed to the membrane if the sidechains are hydrophobic. Proteins are sometimes anchored by a single membrane-spanning helix, sometimes by a pair, and sometimes by a helix bundle, most classically consisting of seven helices arranged up-and-down in a ring such as for rhodopsins (see image at right) and other G protein–coupled receptors (GPCRs). The structural stability between pairs of α-helical transmembrane domains relies on conserved membrane interhelical packing motifs, for example, the Glycine-xxx-Glycine (or small-xxx-small) motif. 
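Because a membrane-spanning helix has to bury roughly twenty consecutive, predominantly hydrophobic side chains in the lipid bilayer, candidate transmembrane helices are often flagged by a sliding-window hydropathy average over the sequence. The sketch below is a simplified illustration of that idea and not the algorithm of any particular prediction server; the residue values are approximate Kyte–Doolittle hydropathies, and the window length, threshold, and example sequence are illustrative assumptions.

```python
# Simplified sliding-window hydropathy scan for candidate transmembrane helices.
# Values are approximate Kyte-Doolittle hydropathies; window and threshold are
# illustrative assumptions, not a validated prediction method.

KD = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def hydropathy_windows(seq, window=19):
    """Yield (start_index, mean_hydropathy) for each window of the sequence."""
    for start in range(len(seq) - window + 1):
        chunk = seq[start:start + window]
        yield start, sum(KD[aa] for aa in chunk) / window

def candidate_tm_segments(seq, window=19, threshold=1.6):
    """Return window start positions whose mean hydropathy exceeds the threshold."""
    return [start for start, mean in hydropathy_windows(seq, window) if mean >= threshold]

if __name__ == "__main__":
    # Made-up sequence with one strongly hydrophobic stretch in the middle.
    seq = "MKTAYSNDQR" + "LLVVAGILAIVFLLLGAGLVA" + "KRQDEENSTK"
    print(candidate_tm_segments(seq))
```

With the made-up sequence above, only windows drawn largely from the central hydrophobic stretch pass the threshold, which is the qualitative behaviour expected for a single-pass membrane protein.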
Mechanical properties Axial tensile deformation of α-helices, a characteristic loading condition that appears in many alpha-helix-rich filaments and tissues, results in a characteristic three-phase behavior of stiff-soft-stiff tangent modulus. Phase I corresponds to the small-deformation regime during which the helix is stretched homogeneously, followed by phase II, in which alpha-helical turns break, mediated by the rupture of groups of H-bonds. Phase III is typically associated with large-deformation covalent bond stretching. Dynamical features Alpha-helices in proteins may have low-frequency accordion-like motion as observed by Raman spectroscopy and analyzed via the quasi-continuum model. Helices not stabilized by tertiary interactions show dynamic behavior, which can be mainly attributed to helix fraying from the ends. Helix–coil transition Homopolymers of amino acids (such as polylysine) can adopt α-helical structure at low temperature that is "melted out" at high temperatures. This helix–coil transition was once thought to be analogous to protein denaturation. The statistical mechanics of this transition can be modeled using an elegant transfer matrix method, characterized by two parameters: the propensity to initiate a helix and the propensity to extend a helix. In art At least five artists have made explicit reference to the α-helix in their work: Julie Newdoll in painting and Julian Voss-Andreae, Bathsheba Grossman, Byron Rubin, and Mike Tyka in sculpture. San Francisco area artist Julie Newdoll, who holds a degree in microbiology with a minor in art, has specialized in paintings inspired by microscopic images and molecules since 1990. Her painting "Rise of the Alpha Helix" (2003) features human figures arranged in an α-helical pattern. According to the artist, "the flowers reflect the various types of sidechains that each amino acid holds out to the world". This same metaphor is also echoed from the scientist's side: "β sheets do not show a stiff repetitious regularity but flow in graceful, twisting curves, and even the α-helix is regular more in the manner of a flower stem, whose branching nodes show the influence of environment, developmental history, and the evolution of each part to match its own idiosyncratic function." Julian Voss-Andreae is a German-born sculptor with degrees in experimental physics and sculpture. Since 2001, Voss-Andreae has created "protein sculptures" based on protein structure, with the α-helix being one of his preferred objects. Voss-Andreae has made α-helix sculptures from diverse materials including bamboo and whole trees. A monument Voss-Andreae created in 2004 to celebrate the memory of Linus Pauling, the discoverer of the α-helix, is fashioned from a large steel beam rearranged in the structure of the α-helix. The bright-red sculpture stands in front of Pauling's childhood home in Portland, Oregon. Ribbon diagrams of α-helices are a prominent element in the laser-etched crystal sculptures of protein structures created by artist Bathsheba Grossman, such as those of insulin, hemoglobin, and DNA polymerase. Byron Rubin is a former protein crystallographer, now a professional sculptor in metal of proteins, nucleic acids, and drug molecules, many of which feature α-helices, such as subtilisin, human growth hormone, and phospholipase A2. Mike Tyka is a computational biochemist at the University of Washington working with David Baker. 
Tyka has been making sculptures of protein molecules since 2010 from copper and steel, including ubiquitin and a potassium channel tetramer. See also 310 helix Beta sheet Davydov soliton Folding (chemistry) Knobs into holes packing Pi helix References Further reading . External links NetSurfP ver. 1.1 – Protein Surface Accessibility and Secondary Structure Predictions α-helix rotational angle calculator Artist Julie Newdoll's website Artist Julian Voss-Andreae's website Protein structural motifs Helices
Alpha helix
[ "Biology" ]
4,818
[ "Protein structural motifs", "Protein classification" ]
3,072
https://en.wikipedia.org/wiki/Arcturus
Arcturus is the brightest star in the northern constellation of Boötes. With an apparent visual magnitude of −0.05, it is the fourth-brightest star in the night sky, and the brightest in the northern celestial hemisphere. The name Arcturus originated from ancient Greece; it was then cataloged as α Boötis by Johann Bayer in 1603, which is Latinized to Alpha Boötis. Arcturus forms one corner of the Spring Triangle asterism. Located relatively close at 36.7 light-years from the Sun, Arcturus is a red giant of spectral type K1.5III—an aging star around 7.1 billion years old that has used up its core hydrogen and evolved off the main sequence. It is about the same mass as the Sun, but has expanded to 25 times its size (around 35 million kilometers) and is around 170 times as luminous. Nomenclature The traditional name Arcturus is Latinised from the ancient Greek Ἀρκτοῦρος (Arktouros) and means "Guardian of the Bear", ultimately from ἄρκτος (arktos), "bear" and οὖρος (ouros), "watcher, guardian". The designation of Arcturus as α Boötis (Latinised to Alpha Boötis) was made by Johann Bayer in 1603. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Arcturus for α Boötis. Observation With an apparent visual magnitude of −0.05, Arcturus is the brightest star in the northern celestial hemisphere and the fourth-brightest star in the night sky, after Sirius (−1.46 apparent magnitude), Canopus (−0.72) and α Centauri (combined magnitude of −0.27). However, α Centauri AB is a binary star, whose components are each fainter than Arcturus. This makes Arcturus the third-brightest individual star, just ahead of α Centauri A (officially named Rigil Kentaurus), whose apparent magnitude is marginally fainter. The French mathematician and astronomer Jean-Baptiste Morin observed Arcturus in the daytime with a telescope in 1635. This was the first recorded full daylight viewing for any star other than the Sun and supernovae. Arcturus has been seen at or just before sunset with the naked eye. Arcturus is visible from both of Earth's hemispheres as it is located 19° north of the celestial equator. The star culminates at midnight on 27 April and at 9 p.m. on 10 June, being visible during the late northern spring or the southern autumn. From the northern hemisphere, an easy way to find Arcturus is to follow the arc of the handle of the Big Dipper (or Plough in the UK). By continuing in this path, one can find Spica, "Arc to Arcturus, then spike (or speed on) to Spica". Together with the bright stars Spica and Regulus (or Denebola, depending on the source), Arcturus is part of the Spring Triangle asterism. With Cor Caroli, these four stars form the Great Diamond asterism. Ptolemy described Arcturus as subrufa ("slightly red"): it has a B-V color index of +1.23, roughly midway between Pollux (B-V +1.00) and Aldebaran (B-V +1.54). η Boötis, or Muphrid, is only 3.3 light-years distant from Arcturus, and would have a visual magnitude of −2.5 as seen from Arcturus, about as bright as Jupiter at its brightest from Earth, whereas an observer in the Muphrid system would find Arcturus at magnitude −5.0, slightly brighter than Venus as seen from Earth, but with an orangish color. 
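The brightness figures quoted above for the view from the Muphrid system follow from the inverse-square law. The short sketch below simply reproduces that arithmetic using only the distances and the apparent magnitude of Arcturus given in this section; the result agrees with the quoted values to within the precision of those inputs.

```python
import math

# Scale the apparent magnitude of Arcturus from Earth's distance (36.7 ly)
# to Muphrid's distance (3.3 ly) using m2 = m1 + 5*log10(d2/d1).
# Input values are the ones quoted in this article.

m_from_earth = -0.05      # apparent visual magnitude of Arcturus from Earth
d_earth_ly = 36.7         # distance of Arcturus from Earth, light-years
d_muphrid_ly = 3.3        # distance of Arcturus from Muphrid, light-years

m_from_muphrid = m_from_earth + 5 * math.log10(d_muphrid_ly / d_earth_ly)
print(f"Arcturus seen from the Muphrid system: about magnitude {m_from_muphrid:.1f}")

# The same relation gives the absolute magnitude (the magnitude at 10 parsecs).
d_earth_pc = d_earth_ly / 3.2616
abs_mag = m_from_earth + 5 * math.log10(10.0 / d_earth_pc)
print(f"Absolute visual magnitude: about {abs_mag:.2f}")
```

The second figure printed by the sketch, about −0.3, matches the absolute magnitude quoted in the following section.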
Physical characteristics Based upon an annual parallax shift of 88.83 milliarcseconds, as measured by the Hipparcos satellite, Arcturus is from Earth. The parallax margin of error is 0.54 milliarcseconds, translating to a distance margin of error of ±. Because of its proximity, Arcturus has a high proper motion, two arcseconds a year, greater than any first magnitude star other than α Centauri. It is the second-closest giant star to Earth, after Pollux. Arcturus is moving rapidly () relative to the Sun, and is now almost at its closest point to the Sun. Closest approach will happen in about 4,000 years, when the star will be a few hundredths of a light-year closer to Earth than it is today. (In antiquity, Arcturus was closer to the centre of the constellation.) Arcturus is thought to be an old-disk star, and appears to be moving with a group of 52 other such stars, known as the Arcturus stream. With an absolute magnitude of −0.30, Arcturus is, together with Vega and Sirius, one of the most luminous stars in the Sun's neighborhood. It is about 110 times brighter than the Sun in visible light wavelengths, but this underestimates its strength as much of the light it gives off is in the infrared; total (bolometric) power output is about 180 times that of the Sun. With a near-infrared J band magnitude of −2.2, only Betelgeuse (−2.9) and R Doradus (−2.6) are brighter. The lower output in visible light is due to a lower efficacy as the star has a lower surface temperature than the Sun. There have been suggestions that Arcturus might be a member of a binary system with a faint, cool companion, but no companion has been directly detected. In the absence of a binary companion, the mass of Arcturus cannot be measured directly, but models suggest it is slightly greater than that of the Sun. Evolutionary matching to the observed physical parameters gives a mass of , while the oxygen isotope ratio for a first dredge-up star gives a mass of . The star, given its evolutionary state, is expected to have undergone significant mass loss in the past. The star displays magnetic activity that is heating the coronal structures, and it undergoes a solar-type magnetic cycle with a duration that is probably less than 14 years. A weak magnetic field has been detected in the photosphere with a strength of around half a gauss. The magnetic activity appears to lie along four latitudes and is rotationally modulated. Arcturus is estimated to be around 6 to 8.5 billion years old, but there is some uncertainty about its evolutionary status. Based upon the color characteristics of Arcturus, it is currently ascending the red-giant branch and will continue to do so until it accumulates a large enough degenerate helium core to ignite the helium flash. It has likely exhausted the hydrogen from its core and is now in its active hydrogen shell burning phase. However, Charbonnel et al. (1998) placed it slightly above the horizontal branch, and suggested it has already completed the helium flash stage. Spectrum Arcturus has evolved off the main sequence to the red giant branch, reaching an early K-type stellar classification. It is frequently assigned the spectral type of K0III, but in 1989 was used as the spectral standard for type K1.5III Fe−0.5, with the suffix notation indicating a mild underabundance of iron compared to typical stars of its type. As the brightest K-type giant in the sky, it has been the subject of multiple atlases with coverage from the ultraviolet to infrared. 
The spectrum shows a dramatic transition from emission lines in the ultraviolet to atomic absorption lines in the visible range and molecular absorption lines in the infrared. This is due to the optical depth of the atmosphere varying with wavelength. The spectrum shows very strong absorption in some molecular lines that are not produced in the photosphere but in a surrounding shell. Examination of carbon monoxide lines show the molecular component of the atmosphere extending outward to 2–3 times the radius of the star, with the chromospheric wind steeply accelerating to 35–40 km/s in this region. Astronomers term "metals" those elements with higher atomic numbers than helium. The atmosphere of Arcturus has an enrichment of alpha elements relative to iron but only about a third of solar metallicity. Arcturus is possibly a Population II star. Oscillations As one of the brightest stars in the sky, Arcturus has been the subject of a number of studies in the emerging field of asteroseismology. Belmonte and colleagues carried out a radial velocity (Doppler shift of spectral lines) study of the star in April and May 1988, which showed variability with a frequency of the order of a few microhertz (μHz), the highest peak corresponding to 4.3 μHz (2.7 days) with an amplitude of 60 ms−1, with a frequency separation of c. 5 μHz. They suggested that the most plausible explanation for the variability of Arcturus is stellar oscillations. Asteroseismological measurements allow direct calculation of the mass and radius, giving values of and . This form of modelling is still relatively inaccurate, but a useful check on other models. Search for planets Hipparcos satellite astrometry suggested that Arcturus is a binary star, with the companion about twenty times dimmer than the primary and orbiting close enough to be at the very limits of humans' current ability to make it out. Recent results remain inconclusive, but do support the marginal Hipparcos detection of a binary companion. In 1993, radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Arcturus exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. This substellar object would be nearly 12 times the mass of Jupiter and be located roughly at the same orbital distance from Arcturus as the Earth is from the Sun, at 1.1 astronomical units. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. So far no substellar companion has been confirmed. Mythology One astronomical tradition associates Arcturus with the mythology around Arcas, who was about to shoot and kill his own mother Callisto who had been transformed into a bear. Zeus averted their imminent tragic fate by transforming the boy into the constellation Boötes, called Arctophylax "bear guardian" by the Greeks, and his mother into Ursa Major (Greek: Arctos "the bear"). The account is given in Hyginus's Astronomy. Aratus in his Phaenomena said that the star Arcturus lay below the belt of Arctophylax, and according to Ptolemy in the Almagest it lay between his thighs. An alternative lore associates the name with the legend around Icarius, who gave the gift of wine to other men, but was murdered by them, because they had had no experience with intoxication and mistook the wine for poison. 
It is stated that Icarius became Arcturus while his dog, Maira, became Canicula (Procyon), although "Arcturus" here may be used in the sense of the constellation rather than the star. Cultural significance As one of the brightest stars in the sky, Arcturus has been significant to observers since antiquity. In ancient Mesopotamia, it was linked to the god Enlil, and also known as Shudun, "yoke", or SHU-PA of unknown derivation in the Three Stars Each Babylonian star catalogues and later MUL.APIN around 1100 BC. In ancient Greek, the star is found in ancient astronomical literature, e.g. Hesiod's Work and Days, circa 700 BC, as well as Hipparchus's and Ptolemy's star catalogs. The folk-etymology connecting the star name with the bears (Greek: ἄρκτος, arktos) was probably invented much later. It fell out of use in favour of Arabic names until it was revived in the Renaissance. Arcturus is also mentioned in Plato's "Laws" (844e) as a herald for the season of vintage, specifically figs and grapes. In Arabic, Arcturus is one of two stars called al-simāk "the uplifted ones" (the other is Spica). Arcturus is specified as السماك الرامح as-simāk ar-rāmiħ "the uplifted one of the lancer". The term Al Simak Al Ramih has appeared in Al Achsasi Al Mouakket catalogue (translated into Latin as Al Simak Lanceator). This has been variously romanized in the past, leading to obsolete variants such as Aramec and Azimech. For example, the name Alramih is used in Geoffrey Chaucer's A Treatise on the Astrolabe (1391). Another Arabic name is Haris-el-sema, from حارس السماء ħāris al-samā’ "the keeper of heaven". or حارس الشمال ħāris al-shamāl’ "the keeper of north". In Indian astronomy, Arcturus is called Swati or Svati (Devanagari स्वाति, Transliteration IAST svāti, svātī́), possibly 'su' + 'ati' ("great goer", in reference to its remoteness) meaning very beneficent. It has been referred to as "the real pearl" in Bhartṛhari's kāvyas. In Chinese astronomy, Arcturus is called Da Jiao (), because it is the brightest star in the Chinese constellation called Jiao Xiu (). Later it became a part of another constellation Kang Xiu (). The Wotjobaluk Koori people of southeastern Australia knew Arcturus as Marpean-kurrk, mother of Djuit (Antares) and another star in Boötes, Weet-kurrk (Muphrid). Its appearance in the north signified the arrival of the larvae of the wood ant (a food item) in spring. The beginning of summer was marked by the star's setting with the Sun in the west and the disappearance of the larvae. The people of Milingimbi Island in Arnhem Land saw Arcturus and Muphrid as man and woman, and took the appearance of Arcturus at sunrise as a sign to go and harvest rakia or spikerush. The Weilwan of northern New South Wales knew Arcturus as Guembila "red". Prehistoric Polynesian navigators knew Arcturus as Hōkūleʻa, the "Star of Joy". Arcturus is the zenith star of the Hawaiian Islands. Using Hōkūleʻa and other stars, the Polynesians launched their double-hulled canoes from Tahiti and the Marquesas Islands. Traveling east and north they eventually crossed the equator and reached the latitude at which Arcturus would appear directly overhead in the summer night sky. Knowing they had arrived at the exact latitude of the island chain, they sailed due west on the trade winds to landfall. If Hōkūleʻa could be kept directly overhead, they landed on the southeastern shores of the Big Island of Hawaii. For a return trip to Tahiti the navigators could use Sirius, the zenith star of that island. 
Since 1976, the Polynesian Voyaging Society's Hōkūleʻa has crossed the Pacific Ocean many times under navigators who have incorporated this wayfinding technique in their non-instrument navigation. Arcturus had several other names that described its significance to indigenous Polynesians. In the Society Islands, Arcturus, called Ana-tahua-taata-metua-te-tupu-mavae ("a pillar to stand by"), was one of the ten "pillars of the sky", bright stars that represented the ten heavens of the Tahitian afterlife. In Hawaii, the pattern of Boötes was called Hoku-iwa, meaning "stars of the frigatebird". This constellation marked the path for Hawaiʻiloa on his return to Hawaii from the South Pacific Ocean. The Hawaiians called Arcturus Hoku-leʻa. It was equated to the Tuamotuan constellation Te Kiva, meaning "frigatebird", which could either represent the figure of Boötes or just Arcturus. However, Arcturus may instead be the Tuamotuan star called Turu. The Hawaiian name for Arcturus as a single star was likely Hoku-leʻa, which means "star of gladness", or "clear star". In the Marquesas Islands, Arcturus was probably called Tau-tou and was the star that ruled the month approximating January. The Māori and Moriori called it Tautoru, a variant of the Marquesan name and a name shared with Orion's Belt. In Inuit astronomy, Arcturus is called the Old Man (Uttuqalualuk in Inuit languages) and The First Ones (Sivulliik in Inuit languages). The Miꞌkmaq of eastern Canada saw Arcturus as Kookoogwéss, the owl. Early-20th-century Armenian scientist Nazaret Daghavarian theorized that the star commonly referred to in Armenian folklore as Gutani astgh (Armenian: Գութանի աստղ; lit. star of the plow) was in fact Arcturus, as the constellation of Boötes was called "Ezogh" (Armenian: Եզող; lit. the person who is plowing) by Armenians. In popular culture In Ancient Rome, the star's celestial activity was supposed to portend tempestuous weather, and a personification of the star acts as narrator of the prologue to Plautus' comedy Rudens (circa 211 BC). The Kāraṇḍavyūha Sūtra, compiled at the end of the 4th century or beginning of the 5th century, names one of Avalokiteśvaras meditative absorptions as "The face of Arcturus". One of the possible etymologies offered for the name "Arthur" assumes that it is derived from "Arcturus" and that the late 5th to early 6th-century figure on whom the myth of King Arthur is based was originally named for the star. In the Middle Ages, Arcturus was considered a Behenian fixed star and attributed to the stone jasper and the plantain herb. Cornelius Agrippa listed its kabbalistic sign under the alternate name Alchameth. Arcturus's light was employed in the mechanism used to open the 1933 Chicago World's Fair. The star was chosen as it was thought that light from Arcturus had started its journey at about the time of the previous Chicago World's Fair in 1893 (at 36.7 light-years away, the light actually started in 1896). At the height of the American Civil War, President Abraham Lincoln observed Arcturus through a 9.6-inch refractor telescope when he visited the Naval Observatory in Washington, D.C., in August 1863. References Further reading External links SolStation.com entry K-type giants Suspected variables Hypothetical planetary systems Arcturus moving group Boötes Bootis, Alpha BD+19 2777 Bootis, 16 0541 124897 069673 5340 TIC objects
Arcturus
[ "Astronomy" ]
4,193
[ "Boötes", "Constellations" ]
3,076
https://en.wikipedia.org/wiki/Antares
Antares is the brightest star in the constellation of Scorpius. It has the Bayer designation α Scorpii, which is Latinised to Alpha Scorpii. Often referred to as "the heart of the scorpion", Antares is flanked by σ Scorpii and τ Scorpii near the center of the constellation. Distinctly reddish when viewed with the naked eye, Antares is a slow irregular variable star that ranges in brightness from an apparent visual magnitude of +0.6 down to +1.6. It is on average the fifteenth-brightest star in the night sky. Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is located about from Earth at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground. Classified as spectral type M1.5Iab-Ib, Antares is a red supergiant, a large evolved massive star and one of the largest stars visible to the naked eye. If placed at the center of the Solar System, it would extend out to somewhere in the asteroid belt. Its mass is calculated to be around 13 or 15 to 16 times that of the Sun. Antares appears as a single star when viewed with the naked eye, but it is actually a binary star system, with its two components called α Scorpii A and α Scorpii B. The brighter of the pair is the red supergiant, while the fainter is a hot main sequence star of magnitude 5.5. They have a projected separation of about . Its traditional name Antares derives from the Ancient Greek , meaning "rival to Ares", due to the similarity of its reddish hue to the appearance of the planet Mars. Nomenclature α Scorpii (Latinised to Alpha Scorpii) is the star's Bayer designation. Antares has the Flamsteed designation 21 Scorpii, as well as catalogue designations such as HR 6134 in the Bright Star Catalogue and HD 148478 in the Henry Draper Catalogue. As a prominent infrared source, it appears in the Two Micron All-Sky Survey catalogue as 2MASS J16292443-2625549 and the Infrared Astronomical Satellite (IRAS) Sky Survey Atlas catalogue as IRAS 16262–2619. It is also catalogued as a double star WDS J16294-2626 and CCDM J16294-2626. Antares is a variable star and is listed in the General Catalogue of Variable Stars, but as a Bayer-designated star it does not have a separate variable star designation. Its traditional name Antares derives from the Ancient Greek , meaning "rival to Ares", due to the similarity of its reddish hue to the appearance of the planet Mars. The comparison of Antares with Mars may have originated with early Mesopotamian astronomers which is considered an outdated speculation, because the name of this star in Mesopotamian astronomy has always been "heart of Scorpion" and it was associated with the goddess Lisin. Some scholars have speculated that the star may have been named after Antar, or Antarah ibn Shaddad, the Arab warrior-hero celebrated in the pre-Islamic poems Mu'allaqat. However, the name "Antares" is already proven in the Greek culture, e.g. in Ptolemy's Almagest and Tetrabiblos. In 2016, the International Astronomical Union organised a Working Group on Star Names (WGSN) to catalog and standardise proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Antares for the star α Scorpii A. It is now so entered in the IAU Catalog of Star Names. Observation Antares is visible all night around May 31 of each year, when the star is at opposition to the Sun. 
Antares then rises at dusk and sets at dawn as seen at the equator. For two to three weeks on either side of November 30, Antares is not visible in the night sky from mid-northern latitudes, because it is near conjunction with the Sun. In higher northern latitudes, Antares is only visible low in the south in summertime. Higher than 64° northern latitude, the star does not rise at all. Antares is easier to see from the southern hemisphere due to its southerly declination. In the whole of Antarctica, the star is circumpolar as the whole continent is above 64° S latitude. History Radial velocity variations were observed in the spectrum of Antares in the early 20th century, and attempts were made to derive spectroscopic orbits. It became apparent that the small variations could not be due to orbital motion, and they were actually caused by pulsation of the star's atmosphere. Even in 1928, it was calculated that the size of the star must vary by about 20%. Antares was first reported to have a companion star by Johann Tobias Bürg during an occultation on April 13, 1819, although this was not widely accepted and dismissed as a possible atmospheric effect. It was then observed by Scottish astronomer James William Grant FRSE while in India on 23 July 1844. It was rediscovered by Ormsby M. Mitchel in 1846 and measured by William Rutter Dawes in April 1847. In 1952, Antares was reported to vary in brightness. A photographic magnitude range from 3.00 to 3.16 was described. The brightness has been monitored by the American Association of Variable Star Observers since 1945, and it has been classified as an LC slow irregular variable star, whose apparent magnitude slowly varies between extremes of +0.6 and +1.6, although usually near magnitude +1.0. There is no obvious periodicity, but statistical analyses have suggested periods of 1,733 days or days. No separate long secondary period has been detected, although it has been suggested that primary periods longer than a thousand days are analogous to long secondary periods. Research published in 2018 demonstrated that Ngarrindjeri Aboriginal people from South Australia observed the variability of Antares and incorporated it into their oral traditions as Waiyungari (meaning 'red man'). Occultations and conjunctions Antares is 4.57 degrees south of the ecliptic, one of four first magnitude stars within 6° of the ecliptic (the others are Spica, Regulus and Aldebaran), so it can be occulted by the Moon. The occultation of 31 July 2009 was visible in much of southern Asia and the Middle East. Every year around December 2 the Sun passes 5° north of Antares. Lunar occultations of Antares are fairly common, depending on the 18.6-year cycle of the lunar nodes. The last cycle ended in 2010 and the next begins in 2023. Shown at right is a video of a reappearance event, clearly showing events for both components. Antares can also be occulted by the planets, e.g. Venus, but these events are rare. The last occultation of Antares by Venus took place on September 17, 525 BC; the next one will be November 17, 2400. Other planets have been calculated not to have occulted Antares over the last millennium, nor will they in the next millennium, as most planets stay near the ecliptic and pass north of Antares. Venus will be extremely near Antares on October 19, 2117, and every eight years thereafter through to October 29, 2157, it will pass south of the star. 
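The variability range quoted above, between apparent magnitudes +0.6 and +1.6, corresponds to a modest change in received flux; the one-line calculation below, using the standard definition of the magnitude scale, is included only to put that range in physical terms.

```python
# Convert Antares's quoted magnitude extremes into a brightness (flux) ratio
# using the definition of the magnitude scale: ratio = 10 ** (0.4 * delta_m).

brightest, faintest = 0.6, 1.6
ratio = 10 ** (0.4 * (faintest - brightest))
print(f"Antares is roughly {ratio:.1f} times brighter at maximum than at minimum")
```

A swing of one magnitude therefore corresponds to roughly a factor of 2.5 in brightness.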
Illumination of Rho Ophiuchi cloud complex Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is a member of the Upper Scorpius subgroup of the association, which contains thousands of stars with a mean age of 11 million years. Antares is located about from Earth at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground. The illuminated cloud is sometimes referred to as the Antares Nebula or is otherwise identified as VdB 107. Stellar system α Scorpii is a double star that is thought to form a binary system. The best calculated orbit for the stars is still considered to be unreliable. It describes an almost circular orbit seen nearly edge-on, with a period of 1,218 years and a semi-major axis of about . Other recent estimates of the period have ranged from 880 years for a calculated orbit, to 2,562 years for a simple Kepler's Law estimate. Early measurements of the pair found them to be about apart in 1847–49, or apart in 1848. More modern observations consistently give separations around . The variations in the separation are often interpreted as evidence of orbital motion, but are more likely to be simply observational inaccuracies with very little true relative motion between the two components. The pair have a projected separation of about 529 astronomical units (AU) (≈ 80 billion km) at the estimated distance of Antares, giving a minimum value for the distance between them. Spectroscopic examination of the energy states in the outflow of matter from the companion star suggests that the latter is over beyond the primary (about 33 billion km). Antares Antares is a red supergiant star with a stellar classification of M1.5Iab-Ib, and is indicated to be a spectral standard for that class. Due to the nature of the star, the derived parallax measurements have large errors, so that the true distance of Antares is approximately from the Sun. The brightness of Antares at visual wavelengths is about 10,000 times that of the Sun, but because the star radiates a considerable part of its energy in the infrared part of the spectrum, the true bolometric luminosity is around 100,000 times that of the Sun. There is a large margin of error assigned to values for the bolometric luminosity, typically 30% or more. There is also considerable variation between values published by different authors, for example and published in 2012 and 2013. The mass of the star has been calculated to be about , or . Comparison of the effective temperature and luminosity of Antares to theoretical evolutionary tracks for massive stars suggest a progenitor mass of and an age of 12 million years (MYr), or an initial mass of and an age of 11 to 15 MYr. Comparison of observations from antiquity to theoretical evolutionary tracks suggests an initial mass of , or the possibility that Antares is on a blue loop with an initial mass of (while excluding as a possible mass estimate). These correspond to ages from 11.8 to 17.3 MYr. These initial mass estimates mean that Antares may have once resembled massive blue stars like the members of the Acrux system, which have similar initial masses (both Antares and Acrux are members of the wider Scorpius–Centaurus association). Massive stars like Antares are expected to explode as supernovae. Like most cool supergiants, Antares's size has much uncertainty due to the tenuous and translucent nature of the extended outer regions of the star. 
Defining an effective temperature is difficult due to spectral lines being generated at different depths in the atmosphere, and linear measurements produce different results depending on the wavelength observed. In addition, Antares pulsates in size, varying its radius by 19%. It also varies in temperature by 150 K, lagging 70 days behind radial velocity changes which are likely to be caused by the pulsations. The diameter of Antares can be measured most accurately using interferometry or by observing lunar occultation events. An apparent diameter from occultations of 41.3 ± 0.1 milliarcseconds has been published. Interferometry allows synthesis of a view of the stellar disc, which is then represented as a limb-darkened disk surrounded by an extended atmosphere. The diameter of the limb-darkened disk was measured as in 2009 and in 2010. The linear radius of the star can be calculated from its angular diameter and distance. However, the distance to Antares is not known with the same accuracy as modern measurements of its diameter. An estimate obtained by interferometry in 1925 by Francis G. Pease at the Mount Wilson Observatory gave Antares a diameter of , equal to approximately , making it the then largest star known. Antares is now known to be somewhat larger; for instance, the Hipparcos satellite's trigonometric parallax of with modern angular diameter estimates leads to a radius of about . Older radii estimates exceeding were derived from older measurements of the diameter, but those measurements are likely to have been affected by asymmetry of the atmosphere and the narrow range of infrared wavelengths observed; Antares has an extended shell which radiates strongly at those particular wavelengths. Despite its large size compared to the Sun, Antares is dwarfed by even larger red supergiants, such as VY Canis Majoris, KY Cygni, RW Cephei or Mu Cephei. Antares, like the similarly sized red supergiant Betelgeuse in the constellation Orion, will almost certainly explode as a supernova, probably in million years. For a few months, the Antares supernova could be as bright as the full moon and be visible in daytime. Antares B Antares B is a magnitude 5.5 blue-white main-sequence star of spectral type B2.5V; it also has numerous unusual spectral lines suggesting it has been polluted by matter ejected by Antares. It is assumed to be a relatively normal early-B main sequence star with a mass around , a temperature around , and a radius of about . As it falls short of the mass limit required for stars to undergo a supernova, it will likely expand into a red giant before dying as a massive white dwarf similar to Sirius B. Antares B is normally difficult to see in small telescopes due to glare from Antares, but can sometimes be seen in apertures over . It is often described as green, but this is probably either a contrast effect, or the result of the mixing of light from the two stars when they are seen together through a telescope and are too close to be completely resolved. Antares B can sometimes be observed with a small telescope for a few seconds during lunar occultations while Antares is hidden by the Moon. Antares B appears as a profound blue or bluish-green color, in contrast to the orange-red Antares. Etymology and mythology In the Babylonian star catalogues dating from at least 1100 BCE, Antares was called GABA GIR.TAB, "the Breast of the Scorpion". 
In MUL.APIN, which dates between 1100 and 700 BC, it is one of the stars of Ea in the southern sky and denotes the breast of the Scorpion goddess Ishhara. Later names that translate as "the Heart of Scorpion" include from the Arabic . This had been directly translated from the Ancient Greek . was a calque of the Greek name rendered in Latin. In ancient Mesopotamia, Antares may have been known by various names: Urbat, Bilu-sha-ziri ("the Lord of the Seed"), Kak-shisa ("the Creator of Prosperity"), Dar Lugal ("The King"), Masu Sar ("the Hero and the King"), and Kakkab Bir ("the Vermilion Star"). In ancient Egypt, Antares represented the scorpion goddess Serket (and was the symbol of Isis in the pyramidal ceremonies). It was called "the red one of the prow". In Persia, Antares was known as one of the four "royal stars". In India, it together with σ Scorpii and τ Scorpii formed Jyeshthā (the eldest or biggest, probably alluding to its huge size), one of the nakshatra (Hindu lunar mansions). The ancient Chinese called Antares 心宿二 (Xīnxiù'èr, "second star of the Heart"), because it was the second star of the mansion Xin (心). It was the national star of the Shang dynasty, and it was sometimes referred to as () because of its reddish appearance. The Māori people of New Zealand call Antares Rēhua, and regard it as the chief of all the stars, especially the Matariki. Rēhua is the father of Puanga/Puaka (Rigel), an important star in the calculation of the Māori calendar. The Wotjobaluk Koori people of Victoria, Australia, knew Antares as Djuit, son of Marpean-kurrk (Arcturus); the stars on each side represented his wives. The Kulin Kooris saw Antares (Balayang) as the brother of Bunjil (Altair). In culture Antares appears in the flag of Brazil, which displays 27 stars, each representing a federated unit of Brazil. Antares represents the state of Piauí. The 1995 Oldsmobile Antares concept car is named after the star. Antares is one of the medieval Behenian fixed stars. References Further reading External links Best Ever Image of a Star’s Surface and Atmosphere – First map of motion of material on a star other than the Sun M-type supergiants B-type main-sequence stars Binary stars Slow irregular variables Population I stars Upper Scorpius Scorpius 6134 Scorpii, Alpha CD-26 11359 Scorpii, 21 148478 9 080763 TIC objects
Antares
[ "Astronomy" ]
3,665
[ "Scorpius", "Constellations" ]
3,077
https://en.wikipedia.org/wiki/Aldebaran
Aldebaran () (Proto-Semitic *dVbr- “bee”) is a star located in the zodiac constellation of Taurus. It has the Bayer designation α Tauri, which is Latinized to Alpha Tauri and abbreviated Alpha Tau or α Tau. Aldebaran varies in brightness from an apparent visual magnitude 0.75 down to 0.95, making it the brightest star in the constellation, as well as (typically) the fourteenth-brightest star in the night sky. It is positioned at a distance of approximately 65 light-years from the Sun. The star lies along the line of sight to the nearby Hyades cluster. Aldebaran is a red giant, meaning that it is cooler than the Sun with a surface temperature of , but its radius is about 45 times the Sun's, so it is over 400 times as luminous. As a giant star, it has moved off the main sequence on the Hertzsprung–Russell diagram after depleting its supply of hydrogen in the core. The star spins slowly and takes 520 days to complete a rotation. Aldebaran is believed to host a planet several times the mass of Jupiter, named . Nomenclature The traditional name Aldebaran derives from the Arabic (), meaning , because it seems to follow the Pleiades. In 2016, the International Astronomical Union Working Group on Star Names (WGSN) approved the proper name Aldebaran for this star. Aldebaran is the brightest star in the constellation Taurus, with the Bayer designation α Tauri, latinised as Alpha Tauri. It has the Flamsteed designation 87 Tauri as the 87th star in the constellation of approximately 7th magnitude or brighter, ordered by right ascension. It also has the Bright Star Catalogue number 1457, the HD number 29139, and the Hipparcos catalogue number 21421, mostly seen in scientific publications. It is a variable star listed in the General Catalogue of Variable Stars, but it is listed using its Bayer designation and does not have a separate variable star designation. Aldebaran and several nearby stars are included in double star catalogues such as the Washington Double Star Catalog as WDS 04359+1631 and the Aitken Double Star Catalogue as ADS 3321. It was included with an 11th-magnitude companion as a double star as H IV 66 in the Herschel Catalogue of Double Stars and Σ II 2 in the Struve Double Star Catalog, and together with a 14th-magnitude star as β 550 in the Burnham Double Star Catalogue. Observation Aldebaran is one of the easiest stars to find in the night sky, partly due to its brightness and partly due to being near one of the more noticeable asterisms in the sky. Following the three stars of Orion's belt in the direction opposite to Sirius, the first bright star encountered is Aldebaran. It is best seen at midnight between late November and early December. The star is, by chance, in the line of sight between the Earth and the Hyades, so it has the appearance of being the brightest member of the open cluster, but the cluster that forms the bull's-head-shaped asterism is more than twice as far away, at about 150 light years. Aldebaran is 5.47 degrees south of the ecliptic and so can be occulted by the Moon. Such occultations occur when the Moon's ascending node is near the autumnal equinox. A series of 49 occultations occurred starting on 29 January 2015 and ending at 3 September 2018. Each event was visible from points in the northern hemisphere or close to the equator; people in e.g. Australia or South Africa can never observe an Aldebaran occultation since it is too far south of the ecliptic. 
A reasonably accurate estimate for the diameter of Aldebaran was obtained during the occultation of 22 September 1978. In the 2020s, Aldebaran is in conjunction in ecliptic longitude with the sun around May 30 of each year. With a near-infrared J band magnitude of −2.1, only Betelgeuse (−2.9), R Doradus (−2.6), and Arcturus (−2.2) are brighter at that wavelength. Observational history On 11 March AD 509, a lunar occultation of Aldebaran was observed in Athens, Greece. English astronomer Edmund Halley studied the timing of this event, and in 1718 concluded that Aldebaran must have changed position since that time, moving several minutes of arc further to the north. This, as well as observations of the changing positions of stars Sirius and Arcturus, led to the discovery of proper motion. Based on present day observations, the position of Aldebaran has shifted 7′ in the last 2000 years; roughly a quarter the diameter of the full moon. Due to precession of the equinoxes, 5,000 years ago the vernal equinox was close to Aldebaran. Between 420,000 and 210,000 years ago, Aldebaran was the brightest star in the night sky, peaking in brightness 320,000 years ago with an apparent magnitude of . English astronomer William Herschel discovered a faint companion to Aldebaran in 1782; an 11th-magnitude star at an angular separation of 117″. This star was shown to be itself a close double star by S. W. Burnham in 1888, and he discovered an additional 14th-magnitude companion at an angular separation of 31″. Follow-on measurements of proper motion showed that Herschel's companion was diverging from Aldebaran, and hence they were not physically connected. However, the companion discovered by Burnham had almost exactly the same proper motion as Aldebaran, suggesting that the two formed a wide binary star system. Working at his private observatory in Tulse Hill, England, in 1864 William Huggins performed the first studies of the spectrum of Aldebaran, where he was able to identify the lines of nine elements, including iron, sodium, calcium, and magnesium. In 1886, Edward C. Pickering at the Harvard College Observatory used a photographic plate to capture fifty absorption lines in the spectrum of Aldebaran. This became part of the Draper Catalogue, published in 1890. By 1887, the photographic technique had improved to the point that it was possible to measure a star's radial velocity from the amount of Doppler shift in the spectrum. By this means, the recession velocity of Aldebaran was estimated as (48 km/s), using measurements performed at Potsdam Observatory by Hermann C. Vogel and his assistant Julius Scheiner. Aldebaran was observed using an interferometer attached to the Hooker Telescope at the Mount Wilson Observatory in 1921 in order to measure its angular diameter, but it was not resolved in these observations. The extensive history of observations of Aldebaran led to it being included in the list of 33 stars chosen as benchmarks for the Gaia mission to calibrate derived stellar parameters. It had previously been used to calibrate instruments on board the Hubble Space Telescope. Physical characteristics Aldebaran is listed as the spectral standard for type K5+ III stars. Its spectrum shows that it is a giant star that has evolved off the main sequence band of the HR diagram after exhausting the hydrogen at its core. The collapse of the center of the star into a degenerate helium core has ignited a shell of hydrogen outside the core and Aldebaran is now on the red giant branch (RGB). 
The effective temperature of Aldebaran's photosphere is . It has a surface gravity of , typical for a giant star, but around 25 times lower than the Earth's and 700 times lower than the Sun's. Its metallicity is about 30% lower than the Sun's. Measurements by the Hipparcos satellite and other sources put Aldebaran around away. Asteroseismology has determined that it is about 16% more massive than the Sun, yet it shines with 518 times the Sun's luminosity due to the expanded radius. The angular diameter of Aldebaran has been measured many times. The value adopted as part of the Gaia benchmark calibration is . It is 44 times the diameter of the Sun, approximately 61 million kilometres. Aldebaran is a slightly variable star, assigned to the slow irregular type LB. The General Catalogue of Variable Stars indicates variation between apparent magnitude 0.75 and 0.95 from historical reports. Modern studies show a smaller amplitude, with some showing almost no variation. Hipparcos photometry shows an amplitude of only about 0.02 magnitudes and a possible period around 18 days. Intensive ground-based photometry showed variations of up to 0.03 magnitudes and a possible period around 91 days. Analysis of observations over a much longer period still find a total amplitude likely to be less than 0.1 magnitudes, and the variation is considered to be irregular. The photosphere shows abundances of carbon, oxygen, and nitrogen that suggest the giant has gone through its first dredge-up stage—a normal step in the evolution of a star into a red giant during which material from deep within the star is brought up to the surface by convection. With its slow rotation, Aldebaran lacks a dynamo needed to generate a corona and hence is not a source of hard X-ray emission. However, small scale magnetic fields may still be present in the lower atmosphere, resulting from convection turbulence near the surface. The measured strength of the magnetic field on Aldebaran is . Any resulting soft X-ray emissions from this region may be attenuated by the chromosphere, although ultraviolet emission has been detected in the spectrum. The star is currently losing mass at a rate of (about one Earth mass in 300,000 years) with a velocity of . This stellar wind may be generated by the weak magnetic fields in the lower atmosphere. Beyond the chromosphere of Aldebaran is an extended molecular outer atmosphere (MOLsphere) where the temperature is cool enough for molecules of gas to form. This region lies at about 2.5 times the radius of the star and has a temperature of about . The spectrum reveals lines of carbon monoxide, water, and titanium oxide. Outside the MOLSphere, the stellar wind continues to expand until it reaches the termination shock boundary with the hot, ionized interstellar medium that dominates the Local Bubble, forming a roughly spherical astrosphere with a radius of around , centered on Aldebaran. Visual companions Five faint stars appear close to Aldebaran in the sky. These double star components were given upper-case Latin letter designations more or less in the order of their discovery, with the letter A reserved for the primary star. Some characteristics of these components, including their position relative to Aldebaran, are shown in the table. Some surveys, for example Gaia Data Release 2, have indicated that Alpha Tauri B may have about the same proper motion and parallax as Aldebaran and thus may be a physical binary system. 
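The quoted radius and luminosity can be cross-checked with the Stefan–Boltzmann law, which in solar units reads T = T☉ (L/R²)^¼. The sketch below is only a consistency check using the rounded figures from this paragraph (518 L☉ and a radius 44 times the Sun's, the same factor as the quoted diameter ratio) together with the nominal solar effective temperature of 5,772 K; it is not a published measurement of Aldebaran's temperature.

```python
# Consistency check of Aldebaran's quoted luminosity and radius via the
# Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4, rearranged in solar units to
# T = T_sun * (L / R**2) ** 0.25.
L = 518.0        # luminosity in solar units, as quoted above
R = 44.0         # radius in solar units (same ratio as the quoted diameter)
T_SUN = 5772.0   # nominal solar effective temperature, kelvin

T_eff = T_SUN * (L / R**2) ** 0.25
print(f"Implied effective temperature: about {T_eff:.0f} K")
# Prints roughly 4150 K, the right range for a late-K giant; published values
# differ slightly because the inputs used here are rounded.
```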
These measurements are difficult, since the dim B component appears so close to the bright primary star, and the margin of error is too large to establish (or exclude) a physical relationship between the two. So far neither the B component, nor anything else, has been unambiguously shown to be physically associated with Aldebaran. The Gaia Data Release 3 measurements again suggest a distance close to that of Aldebaran and similar proper motions. With a parallax of 47.25 milliarcseconds, this translates into a distance of . The NASA Exoplanet Archive recognizes Aldebaran as a binary star, with Aldebaran B being the secondary star. A spectral type of M2.5 has been published for Alpha Tauri B. Alpha Tauri CD is a binary system with the C and D component stars gravitationally bound to and co-orbiting each other. These co-orbiting stars have been shown to be located far beyond Aldebaran and are members of the Hyades star cluster. As with the rest of the stars in the cluster they do not physically interact with Aldebaran in any way. Planetary system In 1993 radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Aldebaran exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. The measurements for Aldebaran implied a companion with a minimum mass 11.4 times that of Jupiter in a 643-day orbit at a separation of in a mildly eccentric orbit. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. In 2015 a study showed stable long-term evidence for both a planetary companion and stellar activity. An asteroseismic analysis of the residuals to the planet fit has determined that Aldebaran b has a minimum mass of Jupiter masses, and that when the star was on the main sequence it would have given this planet Earth-like levels of illumination and therefore, potentially, temperature. This would place it and any of its moons in the habitable zone. However, a follow-up study in 2019 found the evidence for the planet's existence inconclusive. Etymology and mythology Aldebaran was originally ( in Arabic), meaning , since it follows the Pleiades; in fact, the Arabs sometimes also applied the name to the Hyades as a whole. A variety of transliterated spellings have been used, with the current Aldebaran becoming standard relatively recently. Mythology This easily seen and striking star in its suggestive asterism is a popular subject for ancient and modern myths. Mexican culture: For the Seris of northwestern Mexico, this star provides light for the seven women giving birth (Pleiades). It has three names: , , and (). The lunar month corresponding to October is called . Australian Aboriginal culture: amongst indigenous people of the Clarence River, in north-eastern New South Wales, this star is the ancestor Karambal, who stole another man's wife. The woman's husband tracked him down and burned the tree in which he was hiding. It is believed that he rose to the sky as smoke and became the star Aldebaran. Persian culture: Aldebaran is considered one of the four "royal stars". Names in other languages In Indian astronomy it is identified as the lunar station Rohini. In Hindu mythology, Rohini is one of the twenty-seven daughters of the sage-king Daksha and Asikni, and the favourite wife of the moon god, Chandra. In Ancient Greek it has been called , literally or . 
In Chinese, (), meaning , refers to an asterism consisting of Aldebaran, ε Tauri, δ3 Tauri, δ1 Tauri, γ Tauri, 71 Tauri and λ Tauri. Consequently, the Chinese name for Aldebaran itself is (), . In Hawaiian, the star is named Kapuahi. In Biblical Hebrew, עָשׁ (ʿāš) in Job 9:9 and עַ֫יִשׁ (ʿayiš) in Job 38:32 have been identified with it and translated accordingly in English versions such as NJPS and REB. In modern culture As the brightest star in a Zodiac constellation, it is given great significance within astrology. Irish singer and composer Enya has a piece released on her eponymous album in 1986, which lyricist Roma Ryan titled Aldebaran after the star in Taurus. The name Aldebaran or Alpha Tauri has been adopted many times, including Aldebaran Rock in Antarctica United States Navy stores ship and proposed micro-satellite launch vehicle Aldebaran French company Aldebaran Robotics Fashion brand AlphaTauri Formula 1 team Scuderia AlphaTauri, active from to , previously known as Toro Rosso One of the chariot race horses owned by Sheikh Ilderim in the movie Ben-Hur The star also appears in works of fiction such as Far from the Madding Crowd (1874) and Down and Out in Paris and London (1933). It is frequently seen in science fiction, including the Lensman series (1948–1954), Fallen Dragon (2001) and passingly in Kim Stanley Robinson's "Blue Mars" (1996). Aldebaran is associated with Hastur, also known as The King in Yellow, in the horror stories of Robert W. Chambers. Aldebaran regularly features in conspiracy theories as one of the origins of extraterrestrial aliens, often linked to Nazi UFOs. A well-known example is the German conspiracy theorist Axel Stoll, who considered the star the home of the Aryan race and the target of expeditions by the Wehrmacht. The planetary exploration probe Pioneer 10 is no longer powered or in contact with Earth, but its trajectory is taking it in the general direction of Aldebaran. It is expected to make its closest approach in about two million years. The Austrian chemist Carl Auer von Welsbach proposed the name aldebaranium (chemical symbol Ad) for a rare earth element that he (among others) had found. Today, it is called ytterbium (symbol Yb). See also Lists of stars List of brightest stars List of nearest bright stars Historical brightest stars Taurus (Chinese astronomy) References External links Daytime occultation of Aldebaran by the Moon (Moscow, Russia) YouTube video K-type giants Slow irregular variables Hypothetical planetary systems Taurus (constellation) Tauri, Alpha 1457 BD+16 0629 Tauri, 087 0171.1 029139 021421 Stars with proper names 245873777
Aldebaran
[ "Astronomy" ]
3,699
[ "Taurus (constellation)", "Constellations" ]
3,078
https://en.wikipedia.org/wiki/Altair
Altair is the brightest star in the constellation of Aquila and the twelfth-brightest star in the night sky. It has the Bayer designation Alpha Aquilae, which is Latinised from α Aquilae and abbreviated Alpha Aql or α Aql. Altair is an A-type main-sequence star with an apparent visual magnitude of 0.77 and is one of the vertices of the Summer Triangle asterism; the other two vertices are marked by Deneb and Vega. It is located at a distance of from the Sun. Altair is currently in the G-cloud—a nearby interstellar cloud, an accumulation of gas and dust. Altair rotates rapidly, with a velocity at the equator of approximately 286 km/s. This is a significant fraction of the star's estimated breakup speed of 400 km/s. A study with the Palomar Testbed Interferometer revealed that Altair is not spherical, but is flattened at the poles due to its high rate of rotation. Other interferometric studies with multiple telescopes, operating in the infrared, have imaged and confirmed this phenomenon. Nomenclature α Aquilae (Latinised to Alpha Aquilae) is the star's Bayer designation. The traditional name Altair has been used since medieval times. It is an abbreviation of the Arabic phrase Al-Nisr Al-Ṭa'ir, "". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Altair for this star. It is now so entered in the IAU Catalog of Star Names. Physical characteristics Along with β Aquilae and γ Aquilae, Altair forms the well-known line of stars sometimes referred to as the Family of Aquila or Shaft of Aquila. Altair is a type-A main-sequence star with about 1.8 times the mass of the Sun and 11 times its luminosity. It is thought to be a young star close to the zero age main sequence at about 100 million years old, although previous estimates gave an age closer to one billion years old. Altair rotates rapidly, with a rotational period of under eight hours; for comparison, the equator of the Sun makes a complete rotation in a little more than 25 days, but Altair's rotation is similar to, and slightly faster than, those of Jupiter and Saturn. Like those two planets, its rapid rotation causes the star to be oblate; its equatorial diameter is over 20 percent greater than its polar diameter. Satellite measurements made in 1999 with the Wide Field Infrared Explorer showed that the brightness of Altair fluctuates slightly, varying by just a few thousandths of a magnitude with several different periods less than 2 hours. As a result, it was identified in 2005 as a Delta Scuti variable star. Its light curve can be approximated by adding together a number of sine waves, with periods that range between 0.8 and 1.5 hours. It is a weak source of coronal X-ray emission, with the most active sources of emission being located near the star's equator. This activity may be due to convection cells forming at the cooler equator. Rotational effects The angular diameter of Altair was measured interferometrically by R. Hanbury Brown and his co-workers at Narrabri Observatory in the 1960s. They found a diameter of 3milliarcseconds. Although Hanbury Brown et al. realized that Altair would be rotationally flattened, they had insufficient data to experimentally observe its oblateness. 
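Hanbury Brown's 3-milliarcsecond figure can be turned into a physical size with the small-angle relation, size in AU = angular size in arcseconds × distance in parsecs. In the sketch below the distance of about 5.1 parsecs (roughly 16.7 light-years) is an assumed, commonly published value rather than one stated above, so the result is indicative only.

```python
# Linear size implied by Hanbury Brown's ~3 milliarcsecond angular diameter.
# Small-angle relation: physical size [AU] = angular size [arcsec] * distance [pc].
# The distance below is an ASSUMED value (~5.1 pc, about 16.7 light-years), a
# commonly published figure for Altair; it is not stated in the text above.
AU_KM = 1.496e8          # kilometres per astronomical unit
SUN_DIAMETER_KM = 1.39e6 # solar diameter in kilometres

theta_arcsec = 3.0e-3    # angular diameter, 3 milliarcseconds
distance_pc = 5.1        # assumed distance in parsecs

diameter_au = theta_arcsec * distance_pc
diameter_km = diameter_au * AU_KM
print(f"Diameter: {diameter_km:.2e} km = {diameter_km / SUN_DIAMETER_KM:.2f} solar diameters")
# Roughly 1.6-1.7 solar diameters, consistent with the polar and equatorial radii
# (1.63 and 2.03 solar radii) quoted for Altair further on.
```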
Later, using infrared interferometric measurements made by the Palomar Testbed Interferometer in 1999 and 2000, Altair was found to be flattened. This work was published by G. T. van Belle, David R. Ciardi and their co-authors in 2001. Theory predicts that, owing to Altair's rapid rotation, its surface gravity and effective temperature should be lower at the equator, making the equator less luminous than the poles. This phenomenon, known as gravity darkening or the von Zeipel effect, was confirmed for Altair by measurements made by the Navy Precision Optical Interferometer in 2001, and analyzed by Ohishi et al. (2004) and Peterson et al. (2006). Also, A. Domiciano de Souza et al. (2005) verified gravity darkening using the measurements made by the Palomar and Navy interferometers, together with new measurements made by the VINCI instrument at the VLTI. Altair is one of the few stars for which a direct image has been obtained. In 2006 and 2007, J. D. Monnier and his coworkers produced an image of Altair's surface from 2006 infrared observations made with the MIRC instrument on the CHARA array interferometer; this was the first time the surface of any main-sequence star, apart from the Sun, had been imaged. The false-color image was published in 2007. The equatorial radius of the star was estimated to be 2.03 solar radii, and the polar radius 1.63 solar radii—a 25% increase of the stellar radius from pole to equator. The polar axis is inclined by about 60° to the line of sight from the Earth. Etymology, mythology and culture The term Al Nesr Al Tair appeared in Al Achsasi al Mouakket's catalogue, which was translated into Latin as Vultur Volans. This name was applied by the Arabs to the asterism of Altair, β Aquilae and γ Aquilae and probably goes back to the ancient Babylonians and Sumerians, who called Altair "the eagle star". The spelling Atair has also been used. Medieval astrolabes of England and Western Europe depicted Altair and Vega as birds. The Koori people of Victoria also knew Altair as Bunjil, the wedge-tailed eagle, and β and γ Aquilae are his two wives the black swans. The people of the Murray River knew the star as Totyerguil. The Murray River was formed when Totyerguil the hunter speared Otjout, a giant Murray cod, who, when wounded, churned a channel across southern Australia before entering the sky as the constellation Delphinus. In Chinese belief, the asterism consisting of Altair, β Aquilae and γ Aquilae is known as Hé Gǔ (; lit. "river drum"). The Chinese name for Altair is thus Hé Gǔ èr (; lit. "river drum two", meaning the "second star of the drum at the river"). However, Altair is better known by its other names: Qiān Niú Xīng ( / ) or Niú Láng Xīng (), translated as the cowherd star. These names are an allusion to a love story, The Cowherd and the Weaver Girl, in which Niulang (represented by Altair) and his two children (represented by β Aquilae and γ Aquilae) are separated from respectively their wife and mother Zhinu (represented by Vega) by the Milky Way. They are only permitted to meet once a year, when magpies form a bridge to allow them to cross the Milky Way. The people of Micronesia called Altair Mai-lapa, meaning "big/old breadfruit", while the Māori people called this star Poutu-te-rangi, meaning "pillar of heaven". In Western astrology, the star was ill-omened, portending danger from reptiles. This star is one of the asterisms used by Bugis sailors for navigation, called bintoéng timoro, meaning "eastern star". 
A group of Japanese scientists sent a radio signal to Altair in 1983 with the hopes of contacting extraterrestrial life. NASA announced Altair as the name of the Lunar Surface Access Module (LSAM) on December 13, 2007. The Russian-made Beriev Be-200 Altair seaplane is also named after the star. Visual companions The bright primary star has the multiple star designation WDS 19508+0852A and has several faint visual companion stars, WDS 19508+0852B, C, D, E, F and G. All are much more distant than Altair and not physically associated. See also Lists of stars List of brightest stars List of nearest bright stars Historical brightest stars List of most luminous stars Notes References External links Star with Midriff Bulge Eyed by Astronomers, JPL press release, July 25, 2001. Spectrum of Altair Imaging the Surface of Altair, University of Michigan news release detailing the CHARA array direct imaging of the stellar surface in 2007. PIA04204: Altair, NASA. Image of Altair from the Palomar Testbed Interferometer. Altair, SolStation. Secrets of Sun-like star probed, BBC News, June 1, 2007. Astronomers Capture First Images of the Surface Features of Altair , Astromart.com Image of Altair from Aladin. Aquila (constellation) A-type main-sequence stars 4 Aquilae, 53 Aquilae, Alpha 187642 097649 7557 Delta Scuti variables Altair BD+08 4236 G-Cloud Astronomical objects known since antiquity 0768 TIC objects
Altair
[ "Astronomy" ]
1,966
[ "Aquila (constellation)", "Multiple stars", "Sky regions", "Constellations" ]
3,090
https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric%20mean
In mathematics, the arithmetic–geometric mean (AGM or agM) of two positive real numbers and is the mutual limit of a sequence of arithmetic means and a sequence of geometric means. The arithmetic–geometric mean is used in fast algorithms for exponential, trigonometric functions, and other special functions, as well as some mathematical constants, in particular, computing . The AGM is defined as the limit of the interdependent sequences and . Assuming , we write:These two sequences converge to the same number, the arithmetic–geometric mean of and ; it is denoted by , or sometimes by or . The arithmetic–geometric mean can be extended to complex numbers and, when the branches of the square root are allowed to be taken inconsistently, generally it is a multivalued function. Example To find the arithmetic–geometric mean of and , iterate as follows:The first five iterations give the following values: The number of digits in which and agree (underlined) approximately doubles with each iteration. The arithmetic–geometric mean of 24 and 6 is the common limit of these two sequences, which is approximately . History The first algorithm based on this sequence pair appeared in the works of Lagrange. Its properties were further analyzed by Gauss. Properties Both the geometric mean and arithmetic mean of two positive numbers and are between the two numbers. (They are strictly between when .) The geometric mean of two positive numbers is never greater than the arithmetic mean. So the geometric means are an increasing sequence ; the arithmetic means are a decreasing sequence ; and for any . These are strict inequalities if . is thus a number between and ; it is also between the geometric and arithmetic mean of and . If then . There is an integral-form expression for :where is the complete elliptic integral of the first kind:Since the arithmetic–geometric process converges so quickly, it provides an efficient way to compute elliptic integrals, which are used, for example, in elliptic filter design. The arithmetic–geometric mean is connected to the Jacobi theta function bywhich upon setting gives Related concepts The reciprocal of the arithmetic–geometric mean of 1 and the square root of 2 is Gauss's constant.In 1799, Gauss proved thatwhere is the lemniscate constant. In 1941, (and hence ) was proved transcendental by Theodor Schneider. The set is algebraically independent over , but the set (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over . In fact,The geometric–harmonic mean GH can be calculated using analogous sequences of geometric and harmonic means, and in fact . The arithmetic–harmonic mean is equivalent to the geometric mean. The arithmetic–geometric mean can be used to compute – among others – logarithms, complete and incomplete elliptic integrals of the first and second kind, and Jacobi elliptic functions. Proof of existence The inequality of arithmetic and geometric means implies thatand thusthat is, the sequence is nondecreasing and bounded above by the larger of and . By the monotone convergence theorem, the sequence is convergent, so there exists a such that:However, we can also see that: and so: Q.E.D. Proof of the integral-form expression This proof is given by Gauss. Let Changing the variable of integration to , where This yields gives Thus, we have The last equality comes from observing that . 
Finally, we obtain the desired result Applications The number π According to the Gauss–Legendre algorithm, where with and , which can be computed without loss of precision using Complete elliptic integral K(sinα) Taking and yields the AGM where is a complete elliptic integral of the first kind: That is to say that this quarter period may be efficiently computed through the AGM, Other applications Using this property of the AGM along with the ascending transformations of John Landen, Richard P. Brent suggested the first AGM algorithms for the fast evaluation of elementary transcendental functions (, , ). Subsequently, many authors went on to study the use of the AGM algorithms. See also Landen's transformation Gauss–Legendre algorithm Generalized mean References Notes Citations Sources Means Special functions Elliptic functions Articles containing proofs
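A minimal Python sketch of the ideas above: the basic AGM iteration (reproducing the arithmetic–geometric mean of 24 and 6 from the worked example), the standard AGM expression for the complete elliptic integral of the first kind, and the Gauss–Legendre iteration for π. The stopping tolerance and the use of double-precision floats are implementation choices, not part of the algorithms themselves.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean by the iteration described above."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

def ellipk(k):
    """Complete elliptic integral of the first kind, K(k) = pi / (2*agm(1, sqrt(1 - k^2)))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

def gauss_legendre_pi(iterations=4):
    """Gauss-Legendre (AGM-based) iteration for pi."""
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2.0
    return (a + b) ** 2 / (4.0 * t)

print(agm(24, 6))           # ~13.458171..., the common limit in the worked example above
print(ellipk(0.0))          # pi/2, since K(0) = pi/2
print(gauss_legendre_pi())  # agrees with math.pi to double precision after a few steps
```

The quadratic convergence noted above, with the number of agreeing digits roughly doubling each step, is what makes these AGM-based methods attractive for high-precision work.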
Arithmetic–geometric mean
[ "Physics", "Mathematics" ]
866
[ "Means", "Mathematical analysis", "Point (geometry)", "Special functions", "Geometric centers", "Combinatorics", "Articles containing proofs", "Symmetry" ]
3,093
https://en.wikipedia.org/wiki/Alioth
Alioth , also called Epsilon Ursae Majoris, is a star in the northern constellation of Ursa Major. The designation is Latinised from ε Ursae Majoris and abbreviated Epsilon UMa or ε UMa. Despite being designated "ε" (epsilon), it is the brightest star in the constellation and at magnitude 1.77 is the thirty-third brightest star in the sky. It is the star in the tail of the bear closest to its body, and thus the star in the handle of the Big Dipper (or Plough) closest to the bowl. It is also a member of the large and diffuse Ursa Major moving group. Historically, the star was frequently used in celestial navigation in the maritime trade, because it is listed as one of the 57 navigational stars. Physical characteristics According to Hipparcos, Epsilon Ursae Majoris is from the Sun. Its spectral type is A1p; the "p" stands for peculiar, as its spectrum is characteristic of an α2 Canum Venaticorum variable. Epsilon Ursae Majoris, as a representative of this type, may harbor two interacting processes. First, the star's strong magnetic field separating different elements in the star's hydrogen 'fuel'. In addition, a rotation axis at an angle to the magnetic axis may be spinning different bands of magnetically sorted elements into the line of sight between Epsilon Ursae Majoris and the Earth. The intervening elements react differently at different frequencies of light as they whip in and out of view, causing Epsilon Ursae Majoris to have very strange spectral lines that fluctuate over a period of 5.1 days. The kB9 suffix to the spectral type indicates that the calcium K line is present and representative of a B9 spectral type even though the rest of the spectrum indicates A1. Epsilon Ursae Majoris's rotational and magnetic poles are at almost 90 degrees to one another. Darker (denser) regions of chromium form a band at right angles to the equator. It has long been suspected that Epsilon Ursae Majoris is a spectroscopic binary, possibly with more than one companion. A more recent study suggests Epsilon Ursae Majoris's 5.1-day variation may be due to a substellar object of about 14.7 Jupiter masses in an eccentric orbit (e=0.5) with an average separation of 0.055 astronomical units. It is now thought that the 5.1-day period is the rotation period of the star, and no companions have been detected using the most modern equipment. Observations of Alioth with the Navy Precision Optical Interferometer also did not detect a companion. Epsilon Ursae Majoris has a relatively weak magnetic field, 15 times weaker than α Canum Venaticorum, but it is still 100 times stronger than that of the Earth. Name and etymology ε Ursae Majoris (Latinised to Epsilon Ursae Majoris) is the star's Bayer designation. The traditional name Alioth comes from the Arabic alyat al-hamal ("the sheep's fat tail"). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Alioth for this star. This star was known to the Hindus as Añgiras, one of the Seven Rishis. In Chinese, (), meaning Northern Dipper, refers to an asterism equivalent to the Big Dipper. Consequently, the Chinese name for Epsilon Ursae Majoris itself is (, ) and (, ). Namesakes The United States Navy's Crater class cargo ship was named after the star. 
See also List of brightest stars List of nearest bright stars Lists of stars Historical brightest stars References Alpha2 Canum Venaticorum variables Ap stars A-type giants Ursa Major moving group Big Dipper Ursa Major Ursae Majoris, Epsilon 4905 BD+56 1627 Ursae Majoris, 77 112185 062956 Alioth
Alioth
[ "Astronomy" ]
866
[ "Ursa Major", "Constellations" ]
3,107
https://en.wikipedia.org/wiki/Asymptote
In analytic geometry, an asymptote () of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity. The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve. There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function , horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes. Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis. Introduction The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience. Consider the graph of the function shown in this section. The coordinates of the points on the curve are of the form where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of , .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large becomes, its reciprocal is never 0, so the curve never actually touches the x-axis. Similarly, as the values of become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of , 100, 1,000, 10,000 ..., become larger and larger. So the curve extends further and further upward as it comes closer and closer to the y-axis. Thus, both the x and y-axis are asymptotes of the curve. These ideas are part of the basis of concept of a limit in mathematics, and this connection is explained more fully below. Asymptotes of functions The asymptotes most commonly encountered in the study of calculus are of curves of the form . 
These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞. Vertical asymptotes The line x = a is a vertical asymptote of the graph of the function if at least one of the following statements is true: where is the limit as x approaches the value a from the left (from lesser values), and is the limit as x approaches a from the right. For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So and the curve has a vertical asymptote x = 1. The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, for the function has a limit of +∞ as , ƒ(x) has the vertical asymptote , even though ƒ(0) = 5. The graph of this function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, it is impossible that its graph does intersect any vertical asymptote. A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero. If a function has a vertical asymptote, then it isn't necessarily true that the derivative of the function has a vertical asymptote at the same place. An example is at . This function has a vertical asymptote at because and . The derivative of is the function . For the sequence of points for that approaches both from the left and from the right, the values are constantly . Therefore, both one-sided limits of at can be neither nor . Hence doesn't have a vertical asymptote at . Horizontal asymptotes Horizontal asymptotes are horizontal lines that the graph of the function approaches as . The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if or . In the first case, ƒ(x) has y = c as asymptote when x tends to , and in the second ƒ(x) has y = c as an asymptote as x tends to . For example, the arctangent function satisfies and So the line is a horizontal asymptote for the arctangent when x tends to , and is a horizontal asymptote for the arctangent when x tends to . Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, the function has a horizontal asymptote at y = 0 when x tends both to and because, respectively, Other common functions that have one or two horizontal asymptotes include (that has an hyperbola as it graph), the Gaussian function the error function, and the logistic function. Oblique asymptotes When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line (m ≠ 0) if In the first case the line is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line is an oblique asymptote of ƒ(x) when x tends to −∞. 
An example is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is m = 1, n = 0) as seen in the limits Elementary methods for identifying asymptotes The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits). General computation of oblique asymptotes for functions The oblique asymptote, for the function f(x), will be given by the equation y = mx + n. The value for m is computed first and is given by where a is either or depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction. Having m then the value for n can be computed by where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even should the limit defining m exist. Otherwise is the oblique asymptote of ƒ(x) as x tends to a. For example, the function has and then so that is the asymptote of ƒ(x) when x tends to +∞. The function has and then , which does not exist. So does not have an asymptote when x tends to +∞. Asymptotes for rational functions A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes. The degree of the numerator and degree of the denominator determine whether or not there are any horizontal or oblique asymptotes. The cases are tabulated below, where deg(numerator) is the degree of the numerator, and deg(denominator) is the degree of the denominator. The vertical asymptotes occur only when the denominator is zero (If both the numerator and denominator are zero, the multiplicities of the zero are compared). For example, the following function has vertical asymptotes at x = 0, and x = 1, but not at x = 2. Oblique asymptotes of rational functions When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the polynomial term after dividing the numerator and denominator. This phenomenon occurs because when dividing the fraction, there will be a linear term, and a remainder. For example, consider the function shown to the right. As the value of x increases, f approaches the asymptote y = x. This is because the other term, 1/(x+1), approaches 0. If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote. Transformations of known functions If a known function has an asymptote (such as y=0 for f(x)=ex), then the translations of it also have an asymptote. If x=a is a vertical asymptote of f(x), then x=a+h is a vertical asymptote of f(x-h) If y=c is a horizontal asymptote of f(x), then y=c+k is a horizontal asymptote of f(x)+k If a known function has an asymptote, then the scaling of the function also have an asymptote. If y=ax+b is an asymptote of f(x), then y=cax+cb is an asymptote of cf(x) For example, f(x)=ex-1+2 has horizontal asymptote y=0+2=2, and no vertical or oblique asymptotes. General definition Let be a parametric plane curve, in coordinates A(t) = (x(t),y(t)). Suppose that the curve tends to infinity, that is: A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b. 
From the definition, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote. For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve. Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes. Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is then the distance from the point A(t) = (x(t),y(t)) to the line is given by if γ(t) is a change of parameterization then the distance becomes which tends to zero simultaneously as the previous expression. An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x,ƒ(x)). For this, a parameterization is This parameterization is to be considered over the open intervals (a,b), where a can be −∞ and b can be +∞. An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation , where m and are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once. Curvilinear asymptotes Let be a parametric plane curve, in coordinates A(t) = (x(t),y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes. For example, the function has a curvilinear asymptote , which is known as a parabolic asymptote because it is a parabola rather than a straight line. Asymptotes and curve sketching Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity. In order to get better approximations of the curve, curvilinear asymptotes have also been used although the term asymptotic curve seems to be preferred. Algebraic curves The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity. For example, one may identify the asymptotes to the unit hyperbola in this manner. Asymptotes are often considered only for real curves, although they also make sense when defined in this way for curves over an arbitrary field. A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic. 
A plane algebraic curve is defined by an equation of the form P(x,y) = 0 where P is a polynomial of degree n where Pk is homogeneous of degree k. Vanishing of the linear factors of the highest degree term Pn defines the asymptotes of the curve: setting , if , then the line is an asymptote if and are not both zero. If and , there is no asymptote, but the curve has a branch that looks like a branch of parabola. Such a branch is called a , even when it does not have any parabola that is a curvilinear asymptote. If the curve has a singular point at infinity which may have several asymptotes or parabolic branches. Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits in factors that are linear or quadratic factors. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not corresponds to any infinite branch of the real curve. For example, the curve has no real points outside the square , but its highest order term gives the linear factor x with multiplicity 4, leading to the unique asymptote x=0. Asymptotic cone The hyperbola has the two asymptotes The equation for the union of these two lines is Similarly, the hyperboloid is said to have the asymptotic cone The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity. More generally, consider a surface that has an implicit equation where the are homogeneous polynomials of degree and . Then the equation defines a cone which is centered at the origin. It is called an asymptotic cone, because the distance to the cone of a point of the surface tends to zero when the point on the surface tends to infinity. See also Big O notation References General references Specific references External links Hyperboloid and Asymptotic Cone, string surface model, 1872 from the Science Museum Mathematical analysis Analytic geometry
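The limit-based recipes described in this article can be carried out directly with a computer algebra system. The sketch below uses the sympy library (an assumption about available tooling) and the article's own examples: x/(x−1) for a vertical asymptote, arctan for horizontal asymptotes, and x + 1/x for an oblique asymptote found via m = lim f(x)/x and n = lim (f(x) − mx).

```python
# Finding the asymptotes of the article's example functions with sympy.
import sympy as sp

x = sp.symbols('x')

# Vertical asymptote of f(x) = x/(x-1) at x = 1: the one-sided limits are infinite.
f = x / (x - 1)
print(sp.limit(f, x, 1, dir='-'), sp.limit(f, x, 1, dir='+'))   # -oo, oo

# Horizontal asymptotes of arctan: y = -pi/2 and y = pi/2.
print(sp.limit(sp.atan(x), x, -sp.oo), sp.limit(sp.atan(x), x, sp.oo))

# Oblique asymptote of g(x) = x + 1/x via m = lim g/x and n = lim (g - m*x).
g = x + 1 / x
m = sp.limit(g / x, x, sp.oo)
n = sp.limit(g - m * x, x, sp.oo)
print(f"oblique asymptote: y = {m}*x + {n}")    # y = 1*x + 0, i.e. y = x
```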
Asymptote
[ "Mathematics" ]
4,096
[ "Mathematical analysis" ]
3,118
https://en.wikipedia.org/wiki/Arithmetic
Arithmetic is an elementary branch of mathematics that studies numerical operations like addition, subtraction, multiplication, and division. In a wider sense, it also includes exponentiation, extraction of roots, and taking logarithms. Arithmetic systems can be distinguished based on the type of numbers they operate on. Integer arithmetic is about calculations with positive and negative integers. Rational number arithmetic involves operations on fractions of integers. Real number arithmetic is about calculations with real numbers, which include both rational and irrational numbers. Another distinction is based on the numeral system employed to perform calculations. Decimal arithmetic is the most common. It uses the basic numerals from 0 to 9 and their combinations to express numbers. Binary arithmetic, by contrast, is used by most computers and represents numbers as combinations of the basic numerals 0 and 1. Computer arithmetic deals with the specificities of the implementation of binary arithmetic on computers. Some arithmetic systems operate on mathematical objects other than numbers, such as interval arithmetic and matrix arithmetic. Arithmetic operations form the basis of many branches of mathematics, such as algebra, calculus, and statistics. They play a similar role in the sciences, like physics and economics. Arithmetic is present in many aspects of daily life, for example, to calculate change while shopping or to manage personal finances. It is one of the earliest forms of mathematics education that students encounter. Its cognitive and conceptual foundations are studied by psychology and philosophy. The practice of arithmetic is at least thousands and possibly tens of thousands of years old. Ancient civilizations like the Egyptians and the Sumerians invented numeral systems to solve practical arithmetic problems in about 3000 BCE. Starting in the 7th and 6th centuries BCE, the ancient Greeks initiated a more abstract study of numbers and introduced the method of rigorous mathematical proofs. The ancient Indians developed the concept of zero and the decimal system, which Arab mathematicians further refined and spread to the Western world during the medieval period. The first mechanical calculators were invented in the 17th century. The 18th and 19th centuries saw the development of modern number theory and the formulation of axiomatic foundations of arithmetic. In the 20th century, the emergence of electronic calculators and computers revolutionized the accuracy and speed with which arithmetic calculations could be performed. Definition, etymology, and related fields Arithmetic is the fundamental branch of mathematics that studies numbers and their operations. In particular, it deals with numerical calculations using the arithmetic operations of addition, subtraction, multiplication, and division. In a wider sense, it also includes exponentiation, extraction of roots, and logarithm. The term arithmetic has its root in the Latin term which derives from the Ancient Greek words (arithmos), meaning , and (arithmetike tekhne), meaning . There are disagreements about its precise definition. According to a narrow characterization, arithmetic deals only with natural numbers. However, the more common view is to include operations on integers, rational numbers, real numbers, and sometimes also complex numbers in its scope. Some definitions restrict arithmetic to the field of numerical calculations. 
When understood in a wider sense, it also includes the study of how the concept of numbers developed, the analysis of properties of and relations between numbers, and the examination of the axiomatic structure of arithmetic operations. Arithmetic is closely related to number theory and some authors use the terms as synonyms. However, in a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships such as divisibility, factorization, and primality. Traditionally, it is known as higher arithmetic. Numbers Numbers are mathematical objects used to count quantities and measure magnitudes. They are fundamental elements in arithmetic since all arithmetic operations are performed on numbers. There are different kinds of numbers and different numeral systems to represent them. Kinds The main kinds of numbers employed in arithmetic are natural numbers, whole numbers, integers, rational numbers, and real numbers. The natural numbers are whole numbers that start from 1 and go to infinity. They exclude 0 and negative numbers. They are also known as counting numbers and can be expressed as . The symbol of the natural numbers is . The whole numbers are identical to the natural numbers with the only difference being that they include 0. They can be represented as and have the symbol . Some mathematicians do not draw the distinction between the natural and the whole numbers by including 0 in the set of natural numbers. The set of integers encompasses both positive and negative whole numbers. It has the symbol and can be expressed as . Based on how natural and whole numbers are used, they can be distinguished into cardinal and ordinal numbers. Cardinal numbers, like one, two, and three, are numbers that express the quantity of objects. They answer the question "how many?". Ordinal numbers, such as first, second, and third, indicate order or placement in a series. They answer the question "what position?". A number is rational if it can be represented as the ratio of two integers. For instance, the rational number is formed by dividing the integer 1, called the numerator, by the integer 2, called the denominator. Other examples are and . The set of rational numbers includes all integers, which are fractions with a denominator of 1. The symbol of the rational numbers is . Decimal fractions like 0.3 and 25.12 are a special type of rational numbers since their denominator is a power of 10. For instance, 0.3 is equal to , and 25.12 is equal to . Every rational number corresponds to a finite or a repeating decimal. Irrational numbers are numbers that cannot be expressed through the ratio of two integers. They are often required to describe geometric magnitudes. For example, if a right triangle has legs of the length 1 then the length of its hypotenuse is given by the irrational number . is another irrational number and describes the ratio of a circle's circumference to its diameter. The decimal representation of an irrational number is infinite without repeating decimals. The set of rational numbers together with the set of irrational numbers makes up the set of real numbers. The symbol of the real numbers is . Even wider classes of numbers include complex numbers and quaternions. Numeral systems A numeral is a symbol to represent a number and numeral systems are representational frameworks. They usually have a limited amount of basic numerals, which directly refer to certain numbers. 
The system governs how these basic numerals may be combined to express any number. Numeral systems are either positional or non-positional. All early numeral systems were non-positional. For non-positional numeral systems, the value of a digit does not depend on its position in the numeral. The simplest non-positional system is the unary numeral system. It relies on one symbol for the number 1. All higher numbers are written by repeating this symbol. For example, the number 7 can be represented by repeating the symbol for 1 seven times. This system makes it cumbersome to write large numbers, which is why many non-positional systems include additional symbols to directly represent larger numbers. Variations of the unary numeral systems are employed in tally sticks using dents and in tally marks. Egyptian hieroglyphics had a more complex non-positional numeral system. They have additional symbols for numbers like 10, 100, 1000, and 10,000. These symbols can be combined into a sum to more conveniently express larger numbers. For instance, the numeral for 10,405 uses one time the symbol for 10,000, four times the symbol for 100, and five times the symbol for 1. A similar well-known framework is the Roman numeral system. It has the symbols I, V, X, L, C, D, M as its basic numerals to represent the numbers 1, 5, 10, 50, 100, 500, and 1000. A numeral system is positional if the position of a basic numeral in a compound expression determines its value. Positional numeral systems have a radix that acts as a multiplicand of the different positions. For each subsequent position, the radix is raised to a higher power. In the common decimal system, also called the Hindu–Arabic numeral system, the radix is 10. This means that the first digit is multiplied by , the next digit is multiplied by , and so on. For example, the decimal numeral 532 stands for . Because of the effect of the digits' positions, the numeral 532 differs from the numerals 325 and 253 even though they have the same digits. Another positional numeral system used extensively in computer arithmetic is the binary system, which has a radix of 2. This means that the first digit is multiplied by , the next digit by , and so on. For example, the number 13 is written as 1101 in the binary notation, which stands for . In computing, each digit in the binary notation corresponds to one bit. The earliest positional system was developed by ancient Babylonians and had a radix of 60. Operations Arithmetic operations are ways of combining, transforming, or manipulating numbers. They are functions that have numbers both as input and output. The most important operations in arithmetic are addition, subtraction, multiplication, and division. Further operations include exponentiation, extraction of roots, and logarithm. If these operations are performed on variables rather than numbers, they are sometimes referred to as algebraic operations. Two important concepts in relation to arithmetic operations are identity elements and inverse elements. The identity element or neutral element of an operation does not cause any change if it is applied to another element. For example, the identity element of addition is 0 since any sum of a number and 0 results in the same number. The inverse element is the element that results in the identity element when combined with another element. For instance, the additive inverse of the number 6 is -6 since their sum is 0. There are not only inverse elements but also inverse operations. 
In an informal sense, one operation is the inverse of another operation if it undoes the first operation. For example, subtraction is the inverse of addition since a number returns to its original value if a second number is first added and subsequently subtracted, as in . Defined more formally, the operation "" is an inverse of the operation "" if it fulfills the following condition: if and only if . Commutativity and associativity are laws governing the order in which some arithmetic operations can be carried out. An operation is commutative if the order of the arguments can be changed without affecting the results. This is the case for addition, for instance, is the same as . Associativity is a rule that affects the order in which a series of operations can be carried out. An operation is associative if, in a series of two operations, it does not matter which operation is carried out first. This is the case for multiplication, for example, since is the same as . Addition and subtraction Addition is an arithmetic operation in which two numbers, called the addends, are combined into a single number, called the sum. The symbol of addition is . Examples are and . The term summation is used if several additions are performed in a row. Counting is a type of repeated addition in which the number 1 is continuously added. Subtraction is the inverse of addition. In it, one number, known as the subtrahend, is taken away from another, known as the minuend. The result of this operation is called the difference. The symbol of subtraction is . Examples are and . Subtraction is often treated as a special case of addition: instead of subtracting a positive number, it is also possible to add a negative number. For instance . This helps to simplify mathematical computations by reducing the number of basic arithmetic operations needed to perform calculations. The additive identity element is 0 and the additive inverse of a number is the negative of that number. For instance, and . Addition is both commutative and associative. Multiplication and division Multiplication is an arithmetic operation in which two numbers, called the multiplier and the multiplicand, are combined into a single number called the product. The symbols of multiplication are , , and *. Examples are and . If the multiplicand is a natural number then multiplication is the same as repeated addition, as in . Division is the inverse of multiplication. In it, one number, known as the dividend, is split into several equal parts by another number, known as the divisor. The result of this operation is called the quotient. The symbols of division are and . Examples are and . Division is often treated as a special case of multiplication: instead of dividing by a number, it is also possible to multiply by its reciprocal. The reciprocal of a number is 1 divided by that number. For instance, . The multiplicative identity element is 1 and the multiplicative inverse of a number is the reciprocal of that number. For example, and . Multiplication is both commutative and associative. Exponentiation and logarithm Exponentiation is an arithmetic operation in which a number, known as the base, is raised to the power of another number, known as the exponent. The result of this operation is called the power. Exponentiation is sometimes expressed using the symbol ^ but the more common way is to write the exponent in superscript right after the base. Examples are and ^. 
If the exponent is a natural number then exponentiation is the same as repeated multiplication, as in . Roots are a special type of exponentiation using a fractional exponent. For example, the square root of a number is the same as raising the number to the power of and the cube root of a number is the same as raising the number to the power of . Examples are and . Logarithm is the inverse of exponentiation. The logarithm of a number to the base is the exponent to which must be raised to produce . For instance, since , the logarithm base 10 of 1000 is 3. The logarithm of to base is denoted as , or without parentheses, , or even without the explicit base, , when the base can be understood from context. So, the previous example can be written . Exponentiation and logarithm do not have general identity elements and inverse elements like addition and multiplication. The neutral element of exponentiation in relation to the exponent is 1, as in . However, exponentiation does not have a general identity element since 1 is not the neutral element for the base. Exponentiation and logarithm are neither commutative nor associative. Types Different types of arithmetic systems are discussed in the academic literature. They differ from each other based on what type of number they operate on, what numeral system they use to represent them, and whether they operate on mathematical objects other than numbers. Integer arithmetic Integer arithmetic is the branch of arithmetic that deals with the manipulation of positive and negative whole numbers. Simple one-digit operations can be performed by following or memorizing a table that presents the results of all possible combinations, like an addition table or a multiplication table. Other common methods are verbal counting and finger-counting. For operations on numbers with more than one digit, different techniques can be employed to calculate the result by using several one-digit operations in a row. For example, in the method addition with carries, the two numbers are written one above the other. Starting from the rightmost digit, each pair of digits is added together. The rightmost digit of the sum is written below them. If the sum is a two-digit number then the leftmost digit, called the "carry", is added to the next pair of digits to the left. This process is repeated until all digits have been added. Other methods used for integer additions are the number line method, the partial sum method, and the compensation method. A similar technique is utilized for subtraction: it also starts with the rightmost digit and uses a "borrow" or a negative carry for the column on the left if the result of the one-digit subtraction is negative. A basic technique of integer multiplication employs repeated addition. For example, the product of can be calculated as . A common technique for multiplication with larger numbers is called long multiplication. This method starts by writing the multiplier above the multiplicand. The calculation begins by multiplying the multiplier only with the rightmost digit of the multiplicand and writing the result below, starting in the rightmost column. The same is done for each digit of the multiplicand and the result in each case is shifted one position to the left. As a final step, all the individual products are added to arrive at the total product of the two multi-digit numbers. Other techniques used for multiplication are the grid method and the lattice method. 
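To make the long multiplication scheme described above concrete, the following sketch carries it out digit by digit in Python; the function name long_multiply is chosen for this illustration only, and in practice the built-in * operator would of course be used.

```python
def long_multiply(multiplicand: int, multiplier: int) -> int:
    """Multiply two non-negative integers the way long multiplication does:
    one partial product per digit of the multiplier, each shifted one place
    further to the left, then all partial products added together."""
    total = 0
    for shift, digit_char in enumerate(reversed(str(multiplier))):
        digit = int(digit_char)
        partial, carry, place = 0, 0, 1
        # multiply the whole multiplicand by a single digit, right to left
        for m_char in reversed(str(multiplicand)):
            carry, unit = divmod(int(m_char) * digit + carry, 10)
            partial += unit * place
            place *= 10
        partial += carry * place
        total += partial * 10 ** shift  # shift the partial product left
    return total

assert long_multiply(532, 47) == 532 * 47  # both give 25004
```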
Computer science is interested in multiplication algorithms with a low computational complexity to be able to efficiently multiply very large integers, such as the Karatsuba algorithm, the Schönhage–Strassen algorithm, and the Toom–Cook algorithm. A common technique used for division is called long division. Other methods include short division and chunking. Integer arithmetic is not closed under division. This means that when dividing one integer by another integer, the result is not always an integer. For instance, 7 divided by 2 is not a whole number but 3.5. One way to ensure that the result is an integer is to round the result to a whole number. However, this method leads to inaccuracies as the original value is altered. Another method is to perform the division only partially and retain the remainder. For example, 7 divided by 2 is 3 with a remainder of 1. These difficulties are avoided by rational number arithmetic, which allows for the exact representation of fractions. A simple method to calculate exponentiation is by repeated multiplication. For instance, the exponentiation of can be calculated as . A more efficient technique used for large exponents is exponentiation by squaring. It breaks down the calculation into a number of squaring operations. For example, the exponentiation can be written as . By taking advantage of repeated squaring operations, only 7 individual operations are needed rather than the 64 operations required for regular repeated multiplication. Methods to calculate logarithms include the Taylor series and continued fractions. Integer arithmetic is not closed under logarithm and under exponentiation with negative exponents, meaning that the result of these operations is not always an integer. Number theory Number theory studies the structure and properties of integers as well as the relations and laws between them. Some of the main branches of modern number theory include elementary number theory, analytic number theory, algebraic number theory, and geometric number theory. Elementary number theory studies aspects of integers that can be investigated using elementary methods. Its topics include divisibility, factorization, and primality. Analytic number theory, by contrast, relies on techniques from analysis and calculus. It examines problems like how prime numbers are distributed and the claim that every even number is a sum of two prime numbers. Algebraic number theory employs algebraic structures to analyze the properties of and relations between numbers. Examples are the use of fields and rings, as in algebraic number fields like the ring of integers. Geometric number theory uses concepts from geometry to study numbers. For instance, it investigates how lattice points with integer coordinates behave in a plane. Further branches of number theory are probabilistic number theory, which employs methods from probability theory, combinatorial number theory, which relies on the field of combinatorics, computational number theory, which approaches number-theoretic problems with computational methods, and applied number theory, which examines the application of number theory to fields like physics, biology, and cryptography. Influential theorems in number theory include the fundamental theorem of arithmetic, Euclid's theorem, and Fermat's Last Theorem. According to the fundamental theorem of arithmetic, every integer greater than 1 is either a prime number or can be represented as a unique product of prime numbers. 
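The unique prime decomposition asserted by the fundamental theorem of arithmetic can be computed for small numbers by simple trial division, as in the sketch below; prime_factors is an illustrative name, not a standard library routine.

```python
def prime_factors(n: int) -> list:
    """Return the prime factorization of an integer n > 1 as a sorted list."""
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:   # pull out each prime as often as it divides n
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:                     # whatever remains is itself a prime factor
        factors.append(n)
    return factors

print(prime_factors(18))  # [2, 3, 3]
print(prime_factors(19))  # [19], since 19 is already prime
```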
For example, the number 18 is not a prime number and can be represented as , all of which are prime numbers. The number 19, by contrast, is a prime number that has no other prime factorization. Euclid's theorem states that there are infinitely many prime numbers. Fermat's Last Theorem is the statement that no positive integer values exist for , , and that solve the equation if is greater than . Rational number arithmetic Rational number arithmetic is the branch of arithmetic that deals with the manipulation of numbers that can be expressed as a ratio of two integers. Most arithmetic operations on rational numbers can be calculated by performing a series of integer arithmetic operations on the numerators and the denominators of the involved numbers. If two rational numbers have the same denominator then they can be added by adding their numerators and keeping the common denominator. For example, . A similar procedure is used for subtraction. If the two numbers do not have the same denominator then they must be transformed to find a common denominator. This can be achieved by scaling the first number with the denominator of the second number while scaling the second number with the denominator of the first number. For instance, . Two rational numbers are multiplied by multiplying their numerators and their denominators respectively, as in . Dividing one rational number by another can be achieved by multiplying the first number with the reciprocal of the second number. This means that the numerator and the denominator of the second number change position. For example, . Unlike integer arithmetic, rational number arithmetic is closed under division as long as the divisor is not 0. Both integer arithmetic and rational number arithmetic are not closed under exponentiation and logarithm. One way to calculate exponentiation with a fractional exponent is to perform two separate calculations: one exponentiation using the numerator of the exponent followed by drawing the nth root of the result based on the denominator of the exponent. For example, . The first operation can be completed using methods like repeated multiplication or exponentiation by squaring. One way to get an approximate result for the second operation is to employ Newton's method, which uses a series of steps to gradually refine an initial guess until it reaches the desired level of accuracy. The Taylor series or the continued fraction method can be utilized to calculate logarithms. The decimal fraction notation is a special way of representing rational numbers whose denominator is a power of 10. For instance, the rational numbers , , and are written as 0.1, 3.71, and 0.0044 in the decimal fraction notation. Modified versions of integer calculation methods like addition with carry and long multiplication can be applied to calculations with decimal fractions. Not all rational numbers have a finite representation in the decimal notation. For example, the rational number corresponds to 0.333... with an infinite number of 3s. The shortened notation for this type of repeating decimal is 0.. Every repeating decimal expresses a rational number. Real number arithmetic Real number arithmetic is the branch of arithmetic that deals with the manipulation of both rational and irrational numbers. Irrational numbers are numbers that cannot be expressed through fractions or repeated decimals, like the root of 2 and . 
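Before turning to real numbers in more detail, the fraction manipulations described above can be reproduced with Python's standard fractions module, which stores every value as a numerator/denominator pair; this is only a brief sketch of exact rational arithmetic.

```python
from fractions import Fraction

print(Fraction(1, 3) + Fraction(1, 4))  # 7/12: unlike denominators are scaled to a common one
print(Fraction(2, 3) * Fraction(5, 7))  # 10/21: numerators and denominators multiplied separately
print(Fraction(3, 5) / Fraction(2, 9))  # 27/10: multiply by the reciprocal of the divisor
print(Fraction(1, 3))                   # 1/3 is exact, whereas its decimal form 0.333... never terminates
```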
Unlike rational number arithmetic, real number arithmetic is closed under exponentiation as long as it uses a positive number as its base. The same is true for the logarithm of positive real numbers as long as the logarithm base is positive and not 1. Irrational numbers involve an infinite non-repeating series of decimal digits. Because of this, there is often no simple and accurate way to express the results of arithmetic operations like or . In cases where absolute precision is not required, the problem of calculating arithmetic operations on real numbers is usually addressed by truncation or rounding. For truncation, a certain number of leftmost digits are kept and remaining digits are discarded or replaced by zeros. For example, the number has an infinite number of digits starting with 3.14159.... If this number is truncated to 3 decimal places, the result is 3.141. Rounding is a similar process in which the last preserved digit is increased by one if the next digit is 5 or greater but remains the same if the next digit is less than 5, so that the rounded number is the best approximation of a given precision for the original number. For instance, if the number is rounded to 3 decimal places, the result is 3.142 because the following digit is a 5, so 3.142 is closer to than 3.141. These methods allow computers to efficiently perform approximate calculations on real numbers. Approximations and errors In science and engineering, numbers represent estimates of physical quantities derived from measurement or modeling. Unlike mathematically exact numbers such as or , scientifically relevant numerical data are inherently inexact, involving some measurement uncertainty. One basic way to express the degree of certainty about each number's value and avoid false precision is to round each measurement to a certain number of digits, called significant digits, which are implied to be accurate. For example, a person's height measured with a tape measure might only be precisely known to the nearest centimeter, so should be presented as 1.62 meters rather than 1.6217 meters. If converted to imperial units, this quantity should be rounded to 64 inches or 63.8 inches rather than 63.7795 inches, to clearly convey the precision of the measurement. When a number is written using ordinary decimal notation, leading zeros are not significant, and trailing zeros of numbers not written with a decimal point are implicitly considered to be non-significant. For example, the numbers 0.056 and 1200 each have only 2 significant digits, but the number 40.00 has 4 significant digits. Representing uncertainty using only significant digits is a relatively crude method, with some unintuitive subtleties; explicitly keeping track of an estimate or upper bound of the approximation error is a more sophisticated approach. In the example, the person's height might be represented as meters or . In performing calculations with uncertain quantities, the uncertainty should be propagated to calculated quantities. When adding or subtracting two or more quantities, add the absolute uncertainties of each summand together to obtain the absolute uncertainty of the sum. When multiplying or dividing two or more quantities, add the relative uncertainties of each factor together to obtain the relative uncertainty of the product. 
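A minimal sketch of these propagation rules, treating the uncertainties as simple worst-case bounds; the function names and the 0.005 m reading (the "nearest centimeter" of the height example) are assumptions made for this illustration.

```python
def add_uncertain(a, da, b, db):
    """Sum of two uncertain quantities: absolute uncertainties add."""
    return a + b, da + db

def mul_uncertain(a, da, b, db):
    """Product of two uncertain quantities: relative uncertainties add."""
    value = a * b
    relative = da / abs(a) + db / abs(b)
    return value, abs(value) * relative

print(add_uncertain(1.62, 0.005, 0.41, 0.005))  # about (2.03, 0.01): two measured lengths added
print(mul_uncertain(1.62, 0.005, 2.0, 0.1))     # about (3.24, 0.17), i.e. roughly 5.3% relative error
```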
When representing uncertainty by significant digits, uncertainty can be coarsely propagated by rounding the result of adding or subtracting two or more quantities to the leftmost last significant decimal place among the summands, and by rounding the result of multiplying or dividing two or more quantities to the least number of significant digits among the factors. (See .) More sophisticated methods of dealing with uncertain values include interval arithmetic and affine arithmetic. Interval arithmetic describes operations on intervals. Intervals can be used to represent a range of values if one does not know the precise magnitude, for example, because of measurement errors. Interval arithmetic includes operations like addition and multiplication on intervals, as in and . It is closely related to affine arithmetic, which aims to give more precise results by performing calculations on affine forms rather than intervals. An affine form is a number together with error terms that describe how the number may deviate from the actual magnitude. The precision of numerical quantities can be expressed uniformly using normalized scientific notation, which is also convenient for concisely representing numbers which are much larger or smaller than 1. Using scientific notation, a number is decomposed into the product of a number between 1 and 10, called the significand, and 10 raised to some integer power, called the exponent. The significand consists of the significant digits of the number, and is written as a leading digit 1–9 followed by a decimal point and a sequence of digits 0–9. For example, the normalized scientific notation of the number 8276000 is with significand 8.276 and exponent 6, and the normalized scientific notation of the number 0.00735 is with significand 7.35 and exponent −3. Unlike ordinary decimal notation, where trailing zeros of large numbers are implicitly considered to be non-significant, in scientific notation every digit in the significand is considered significant, and adding trailing zeros indicates higher precision. For example, while the number 1200 implicitly has only 2 significant digits, the number explicitly has 3. A common method employed by computers to approximate real number arithmetic is called floating-point arithmetic. It represents real numbers similar to the scientific notation through three numbers: a significand, a base, and an exponent. The precision of the significand is limited by the number of bits allocated to represent it. If an arithmetic operation results in a number that requires more bits than are available, the computer rounds the result to the closest representable number. This leads to rounding errors. A consequence of this behavior is that certain laws of arithmetic are violated by floating-point arithmetic. For example, floating-point addition is not associative since the rounding errors introduced can depend on the order of the additions. This means that the result of is sometimes different from the result of The most common technical standard used for floating-point arithmetic is called IEEE 754. Among other things, it determines how numbers are represented, how arithmetic operations and rounding are performed, and how errors and exceptions are handled. In cases where computation speed is not a limiting factor, it is possible to use arbitrary-precision arithmetic, for which the precision of calculations is only restricted by the computer's memory. 
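The rounding and the lost associativity described above are easy to observe with IEEE 754 double-precision numbers; the short sketch below uses Python floats, together with the standard decimal module for a user-chosen higher precision.

```python
# 0.1 has no exact binary representation, so a rounding error appears:
print(0.1 + 0.2)                                   # 0.30000000000000004

# Floating-point addition is not associative:
print((1e16 + 1.0) + 1.0 == 1e16 + (1.0 + 1.0))    # False

# Arbitrary (user-chosen) precision with the decimal module:
from decimal import Decimal, getcontext
getcontext().prec = 50                             # 50 significant digits
print(Decimal(1) / Decimal(7))                     # 0.142857... carried to 50 digits
```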
Tool use Forms of arithmetic can also be distinguished by the tools employed to perform calculations and include many approaches besides the regular use of pen and paper. Mental arithmetic relies exclusively on the mind without external tools. Instead, it utilizes visualization, memorization, and certain calculation techniques to solve arithmetic problems. One such technique is the compensation method, which consists in altering the numbers to make the calculation easier and then adjusting the result afterward. For example, instead of calculating , one calculates which is easier because it uses a round number. In the next step, one adds to the result to compensate for the earlier adjustment. Mental arithmetic is often taught in primary education to train the numerical abilities of the students. The human body can also be employed as an arithmetic tool. The use of hands in finger counting is often introduced to young children to teach them numbers and simple calculations. In its most basic form, the number of extended fingers corresponds to the represented quantity and arithmetic operations like addition and subtraction are performed by extending or retracting fingers. This system is limited to small numbers compared to more advanced systems which employ different approaches to represent larger quantities. The human voice is used as an arithmetic aid in verbal counting. Tally marks are a simple system based on external tools other than the body. This system relies on mark making, such as strokes drawn on a surface or notches carved into a wooden stick, to keep track of quantities. Some forms of tally marks arrange the strokes in groups of five to make them easier to read. The abacus is a more advanced tool to represent numbers and perform calculations. An abacus usually consists of a series of rods, each holding several beads. Each bead represents a quantity, which is counted if the bead is moved from one end of a rod to the other. Calculations happen by manipulating the positions of beads until the final bead pattern reveals the result. Related aids include counting boards, which use tokens whose value depends on the area on the board in which they are placed, and counting rods, which are arranged in horizontal and vertical patterns to represent different numbers. Sectors and slide rules are more refined calculating instruments that rely on geometric relationships between different scales to perform both basic and advanced arithmetic operations. Printed tables were particularly relevant as an aid to look up the results of operations like logarithm and trigonometric functions. Mechanical calculators automate manual calculation processes. They present the user with some form of input device to enter numbers by turning dials or pressing keys. They include an internal mechanism usually consisting of gears, levers, and wheels to perform calculations and display the results. For electronic calculators and computers, this procedure is further refined by replacing the mechanical components with electronic circuits like microprocessors that combine and transform electric signals to perform calculations. Others There are many other types of arithmetic. Modular arithmetic operates on a finite set of numbers. If an operation would result in a number outside this finite set then the number is adjusted back into the set, similar to how the hands of clocks start at the beginning again after having completed one cycle. The number at which this adjustment happens is called the modulus. 
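A small sketch of this clock-style wrap-around using Python's % remainder operator; the function name is illustrative, and the convention of a clock face showing 12 rather than 0 is handled explicitly.

```python
def clock_add(hour: int, amount: int, modulus: int = 12) -> int:
    """Add hours on a clock face with the given modulus."""
    result = (hour + amount) % modulus
    return result if result != 0 else modulus  # a clock shows 12, not 0

print(clock_add(9, 4))    # 1 rather than 13
print(clock_add(11, 14))  # 1, since 25 wraps past 12 twice
```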
For example, a regular clock has a modulus of 12. In the case of adding 4 to 9, this means that the result is not 13 but 1. The same principle applies also to other operations, such as subtraction, multiplication, and division. Some forms of arithmetic deal with operations performed on mathematical objects other than numbers. Interval arithmetic describes operations on intervals. Vector arithmetic and matrix arithmetic describe arithmetic operations on vectors and matrices, like vector addition and matrix multiplication. Arithmetic systems can be classified based on the numeral system they rely on. For instance, decimal arithmetic describes arithmetic operations in the decimal system. Other examples are binary arithmetic, octal arithmetic, and hexadecimal arithmetic. Compound unit arithmetic describes arithmetic operations performed on magnitudes with compound units. It involves additional operations to govern the transformation between single unit and compound unit quantities. For example, the operation of reduction is used to transform the compound quantity 1 h 90 min into the single unit quantity 150 min. Non-Diophantine arithmetics are arithmetic systems that violate traditional arithmetic intuitions and include equations like and . They can be employed to represent some real-world situations in modern physics and everyday life. For instance, the equation can be used to describe the observation that if one raindrop is added to another raindrop then they do not remain two separate entities but become one. Axiomatic foundations Axiomatic foundations of arithmetic try to provide a small set of laws, called axioms, from which all fundamental properties of and operations on numbers can be derived. They constitute logically consistent and systematic frameworks that can be used to formulate mathematical proofs in a rigorous manner. Two well-known approaches are the Dedekind–Peano axioms and set-theoretic constructions. The Dedekind–Peano axioms provide an axiomatization of the arithmetic of natural numbers. Their basic principles were first formulated by Richard Dedekind and later refined by Giuseppe Peano. They rely only on a small number of primitive mathematical concepts, such as 0, natural number, and successor. The Peano axioms determine how these concepts are related to each other. All other arithmetic concepts can then be defined in terms of these primitive concepts. 0 is a natural number. For every natural number, there is a successor, which is also a natural number. The successors of two different natural numbers are never identical. 0 is not the successor of a natural number. If a set contains 0 and every successor then it contains every natural number. Numbers greater than 0 are expressed by repeated application of the successor function . For example, is and is . Arithmetic operations can be defined as mechanisms that affect how the successor function is applied. For instance, to add to any number is the same as applying the successor function two times to this number. Various axiomatizations of arithmetic rely on set theory. They cover natural numbers but can also be extended to integers, rational numbers, and real numbers. Each natural number is represented by a unique set. 0 is usually defined as the empty set . Each subsequent number can be defined as the union of the previous number with the set containing the previous number. For example, , , and . Integers can be defined as ordered pairs of natural numbers where the second number is subtracted from the first one. 
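To make the role of the successor function concrete, the sketch below defines addition as repeated application of a successor, mirroring the Dedekind–Peano description above; it is only an illustration, not how machine addition works.

```python
def successor(n: int) -> int:
    """The primitive successor operation S(n)."""
    return n + 1

def peano_add(a: int, b: int) -> int:
    """Adding b means applying the successor function b times."""
    result = a
    for _ in range(b):
        result = successor(result)
    return result

print(peano_add(2, 3))  # 5, i.e. S(S(S(2)))
```

The set-theoretic constructions express the same ideas with nested sets and extend them to the integers through ordered pairs of natural numbers.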
For instance, the pair (9, 0) represents the number 9 while the pair (0, 9) represents the number -9. Rational numbers are defined as pairs of integers where the first number represents the numerator and the second number represents the denominator. For example, the pair (3, 7) represents the rational number . One way to construct the real numbers relies on the concept of Dedekind cuts. According to this approach, each real number is represented by a partition of all rational numbers into two sets, one for all numbers below the represented real number and the other for the rest. Arithmetic operations are defined as functions that perform various set-theoretic transformations on the sets representing the input numbers to arrive at the set representing the result. History The earliest forms of arithmetic are sometimes traced back to counting and tally marks used to keep track of quantities. Some historians suggest that the Lebombo bone (dated about 43,000 years ago) and the Ishango bone (dated about 22,000 to 30,000 years ago) are the oldest arithmetic artifacts but this interpretation is disputed. However, a basic sense of numbers may predate these findings and might even have existed before the development of language. It was not until the emergence of ancient civilizations that a more complex and structured approach to arithmetic began to evolve, starting around 3000 BCE. This became necessary because of the increased need to keep track of stored items, manage land ownership, and arrange exchanges. All the major ancient civilizations developed non-positional numeral systems to facilitate the representation of numbers. They also had symbols for operations like addition and subtraction and were aware of fractions. Examples are Egyptian hieroglyphics as well as the numeral systems invented in Sumeria, China, and India. The first positional numeral system was developed by the Babylonians starting around 1800 BCE. This was a significant improvement over earlier numeral systems since it made the representation of large numbers and calculations on them more efficient. Abacuses have been utilized as hand-operated calculating tools since ancient times as efficient means for performing complex calculations. Early civilizations primarily used numbers for concrete practical purposes, like commercial activities and tax records, but lacked an abstract concept of number itself. This changed with the ancient Greek mathematicians, who began to explore the abstract nature of numbers rather than studying how they are applied to specific problems. Another novel feature was their use of proofs to establish mathematical truths and validate theories. A further contribution was their distinction of various classes of numbers, such as even numbers, odd numbers, and prime numbers. This included the discovery that numbers for certain geometrical lengths are irrational and therefore cannot be expressed as a fraction. The works of Thales of Miletus and Pythagoras in the 7th and 6th centuries BCE are often regarded as the inception of Greek mathematics. Diophantus was an influential figure in Greek arithmetic in the 3rd century CE because of his numerous contributions to number theory and his exploration of the application of arithmetic operations to algebraic equations. The ancient Indians were the first to develop the concept of zero as a number to be used in calculations. The exact rules of its operation were written down by Brahmagupta in around 628 CE.
The concept of zero or none existed long before, but it was not considered an object of arithmetic operations. Brahmagupta further provided a detailed discussion of calculations with negative numbers and their application to problems like credit and debt. The concept of negative numbers itself is significantly older and was first explored in Chinese mathematics in the first millennium BCE. Indian mathematicians also developed the positional decimal system used today, in particular the concept of a zero digit instead of empty or missing positions. For example, a detailed treatment of its operations was provided by Aryabhata around the turn of the 6th century CE. The Indian decimal system was further refined and expanded to non-integers during the Islamic Golden Age by Middle Eastern mathematicians such as Al-Khwarizmi. His work was influential in introducing the decimal numeral system to the Western world, which at that time relied on the Roman numeral system. There, it was popularized by mathematicians like Leonardo Fibonacci, who lived in the 12th and 13th centuries and also developed the Fibonacci sequence. During the Middle Ages and Renaissance, many popular textbooks were published to cover the practical calculations for commerce. The use of abacuses also became widespread in this period. In the 16th century, the mathematician Gerolamo Cardano conceived the concept of complex numbers as a way to solve cubic equations. The first mechanical calculators were developed in the 17th century and greatly facilitated complex mathematical calculations, such as Blaise Pascal's calculator and Gottfried Wilhelm Leibniz's stepped reckoner. The 17th century also saw the discovery of the logarithm by John Napier. In the 18th and 19th centuries, mathematicians such as Leonhard Euler and Carl Friedrich Gauss laid the foundations of modern number theory. Another development in this period concerned work on the formalization and foundations of arithmetic, such as Georg Cantor's set theory and the Dedekind–Peano axioms used as an axiomatization of natural-number arithmetic. Computers and electronic calculators were first developed in the 20th century. Their widespread use revolutionized both the accuracy and speed with which even complex arithmetic computations can be calculated. In various fields Education Arithmetic education forms part of primary education. It is one of the first forms of mathematics education that children encounter. Elementary arithmetic aims to give students a basic sense of numbers and to familiarize them with fundamental numerical operations like addition, subtraction, multiplication, and division. It is usually introduced in relation to concrete scenarios, like counting beads, dividing the class into groups of children of the same size, and calculating change when buying items. Common tools in early arithmetic education are number lines, addition and multiplication tables, counting blocks, and abacuses. Later stages focus on a more abstract understanding and introduce the students to different types of numbers, such as negative numbers, fractions, real numbers, and complex numbers. They further cover more advanced numerical operations, like exponentiation, extraction of roots, and logarithm. They also show how arithmetic operations are employed in other branches of mathematics, such as their application to describe geometrical shapes and the use of variables in algebra. 
Another aspect is to teach the students the use of algorithms and calculators to solve complex arithmetic problems. Psychology The psychology of arithmetic is interested in how humans and animals learn about numbers, represent them, and use them for calculations. It examines how mathematical problems are understood and solved and how arithmetic abilities are related to perception, memory, judgment, and decision making. For example, it investigates how collections of concrete items are first encountered in perception and subsequently associated with numbers. A further field of inquiry concerns the relation between numerical calculations and the use of language to form representations. Psychology also explores the biological origin of arithmetic as an inborn ability. This concerns pre-verbal and pre-symbolic cognitive processes implementing arithmetic-like operations required to successfully represent the world and perform tasks like spatial navigation. One of the concepts studied by psychology is numeracy, which is the capability to comprehend numerical concepts, apply them to concrete situations, and reason with them. It includes a fundamental number sense as well as being able to estimate and compare quantities. It further encompasses the abilities to symbolically represent numbers in numbering systems, interpret numerical data, and evaluate arithmetic calculations. Numeracy is a key skill in many academic fields. A lack of numeracy can inhibit academic success and lead to bad economic decisions in everyday life, for example, by misunderstanding mortgage plans and insurance policies. Philosophy The philosophy of arithmetic studies the fundamental concepts and principles underlying numbers and arithmetic operations. It explores the nature and ontological status of numbers, the relation of arithmetic to language and logic, and how it is possible to acquire arithmetic knowledge. According to Platonism, numbers have mind-independent existence: they exist as abstract objects outside spacetime and without causal powers. This view is rejected by intuitionists, who claim that mathematical objects are mental constructions. Further theories are logicism, which holds that mathematical truths are reducible to logical truths, and formalism, which states that mathematical principles are rules of how symbols are manipulated without claiming that they correspond to entities outside the rule-governed activity. The traditionally dominant view in the epistemology of arithmetic is that arithmetic truths are knowable a priori. This means that they can be known by thinking alone without the need to rely on sensory experience. Some proponents of this view state that arithmetic knowledge is innate while others claim that there is some form of rational intuition through which mathematical truths can be apprehended. A more recent alternative view was suggested by naturalist philosophers like Willard Van Orman Quine, who argue that mathematical principles are high-level generalizations that are ultimately grounded in the sensory world as described by the empirical sciences. Others Arithmetic is relevant to many fields. In daily life, it is required to calculate change when shopping, manage personal finances, and adjust a cooking recipe for a different number of servings. Businesses use arithmetic to calculate profits and losses and analyze market trends. In the field of engineering, it is used to measure quantities, calculate loads and forces, and design structures. 
Cryptography relies on arithmetic operations to protect sensitive information by encrypting data and messages. Arithmetic is intimately connected to many branches of mathematics that depend on numerical operations. Algebra relies on arithmetic principles to solve equations using variables. These principles also play a key role in calculus in its attempt to determine rates of change and areas under curves. Geometry uses arithmetic operations to measure the properties of shapes while statistics utilizes them to analyze numerical data. Due to the relevance of arithmetic operations throughout mathematics, the influence of arithmetic extends to most sciences such as physics, computer science, and economics. These operations are used in calculations, problem-solving, data analysis, and algorithms, making them integral to scientific research, technological development, and economic modeling. See also Algorism Expression (mathematics) Finite field arithmetic Outline of arithmetic Plant arithmetic References Notes Citations Sources External links
Arithmetic
[ "Mathematics" ]
9,524
[ "Arithmetic", "Number theory" ]
3,124
https://en.wikipedia.org/wiki/Afterglow
An afterglow in meteorology consists of several atmospheric optical phenomena, with a general definition as a broad arch of whitish or pinkish sunlight in the twilight sky, consisting of the bright segment and the purple light. Purple light mainly occurs when the Sun is 2–6° below the horizon, from civil to nautical twilight, while the bright segment lasts until the end of the nautical twilight. Afterglow is often discussed in connection with volcanic eruptions, in which case its purple light is treated as a distinct volcanic purple light. Specifically in volcanic occurrences it is light scattered by fine particulates, like dust, suspended in the atmosphere. In the case of alpenglow, which is similar to the Belt of Venus, afterglow is used in general for the golden-red glowing light from the sunset and sunrise reflected in the sky, and in particular for its last stage, when the purple light is reflected. The opposite of an afterglow is a foreglow, which occurs before sunrise. Around civil twilight, during the golden hour, sunlight reaches Earth predominantly in its low-energy, low-frequency red component. During this part of civil twilight, after sunset and before sunrise, the red sunlight remains visible by scattering through particles in the air. Backscattering, possibly after reflection off clouds or high snowfields in mountain regions, additionally creates a reddish to pinkish light. The high-energy and high-frequency components of light towards blue are scattered out broadly, producing the broader blue light of nautical twilight before or after the reddish light of civil twilight, while in combination with the reddish light producing the purple light. This period in which blue dominates is referred to as the blue hour and is, like the golden hour, widely treasured by photographers and painters. After the 1883 eruption of the volcano Krakatoa, a remarkable series of red sunsets appeared worldwide. An enormous amount of exceedingly fine dust was blown to a great height by the volcano's explosion and then globally diffused by the high atmospheric winds. Edvard Munch's painting The Scream possibly depicts an afterglow during this period. See also Airglow Belt of Venus Earth's shadow Gegenschein Red sky at morning Sunset References External links Atmospheric optical phenomena
Afterglow
[ "Physics" ]
466
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
3,135
https://en.wikipedia.org/wiki/Arteriovenous%20malformation
An arteriovenous malformation (AVM) is an abnormal connection between arteries and veins, bypassing the capillary system. Usually congenital, this vascular anomaly is widely known because of its occurrence in the central nervous system (usually as a cerebral AVM), but can appear anywhere in the body. The symptoms of AVMs can range from none at all to intense pain or bleeding, and they can lead to other serious medical problems. Signs and symptoms Symptoms of AVMs vary according to their location. Most neurological AVMs produce few to no symptoms. Often the malformation is discovered as part of an autopsy or during treatment of an unrelated disorder (an "incidental finding"); in rare cases, its expansion or a micro-bleed from an AVM in the brain can cause epilepsy, neurological deficit, or pain. The most general symptoms of a cerebral AVM include headaches and epileptic seizures, with more specific symptoms that normally depend on its location and the individual, including: Difficulties with movement coordination, including muscle weakness and even paralysis; Vertigo (dizziness); Difficulties of speech (dysarthria) and communication, such as aphasia; Difficulties with everyday activities, such as apraxia; Abnormal sensations (numbness, tingling, or spontaneous pain); Memory and thought-related problems, such as confusion, dementia, or hallucinations. Cerebral AVMs may present themselves in a number of different ways: Bleeding (45% of cases); in rare cases, parkinsonism, with symptoms resembling those of Parkinson's disease, has been reported. Acute onset of severe headache. May be described as the worst headache of the patient's life. Depending on the location of bleeding, may be associated with new fixed neurologic deficit. In unruptured brain AVMs, the risk of spontaneous bleeding may be as low as 1% per year. After a first rupture, the annual bleeding risk may increase to more than 5%. Seizure (46%). Depending on the place of the AVM, it can contribute to loss of vision. Headache (34%) Progressive neurologic deficit (21%) May be caused by mass effect or venous dilatations. Presence and nature of the deficit depend on location of lesion and the draining veins. Pediatric patients Heart failure Macrocephaly Prominent scalp veins Pulmonary arteriovenous malformations Pulmonary arteriovenous malformations are abnormal communications between the veins and arteries of the pulmonary circulation, leading to a right-to-left blood shunt. They have no symptoms in up to 29% of all cases; however, they can give rise to serious complications including hemorrhage and infection. They are most commonly associated with hereditary hemorrhagic telangiectasia. Genetics AVMs are usually congenital and are part of the RASopathy family of developmental syndromes. The understanding of the anomaly's genetic transmission patterns is incomplete, but there are known genetic mutations (for instance in the epithelial line, tumor suppressor PTEN gene) which can lead to an increased occurrence throughout the body. The anomaly can occur due to autosomal dominant diseases, such as hereditary hemorrhagic telangiectasia. Pathophysiology In the circulatory system, arteries carry blood away from the heart to the lungs and the rest of the body, where the blood normally passes through capillaries, where oxygen is released and waste products like carbon dioxide (CO2) are absorbed, before veins return blood to the heart. An AVM interferes with this process by forming a direct connection of the arteries and veins, bypassing the capillary bed. 
AVMs can cause intense pain and lead to serious medical problems. Although AVMs are often associated with the brain and spinal cord, they can develop in other parts of the body. As an AVM lacks the dampening effect of capillaries on the blood flow, the AVM can get progressively larger over time as the amount of blood flowing through it increases, forcing the heart to work harder to keep up with the extra blood flow. It also causes the surrounding area to be deprived of the functions of the capillaries. The resulting tangle of blood vessels, often called a nidus (Latin for 'nest'), has no capillaries. It can be extremely fragile and prone to bleeding because of the abnormally direct connections between high-pressure arteries and low-pressure veins. One indicator is a pulsing 'whoosh' sound caused by rapid blood flow through arteries and veins, which has been given the term bruit (French for 'noise'). If the AVM is severe, this may produce an audible symptom which can interfere with hearing and sleep as well as cause psychological distress. Diagnosis AVMs are diagnosed primarily by the following imaging methods: Computed tomography (CT) scan is a noninvasive X-ray to view the anatomical structures within the brain to detect blood in or around the brain. A newer technology called CT angiography involves the injection of contrast into the blood stream to view the arteries of the brain. This type of test provides the best pictures of blood vessels through angiography and soft tissues through CT. Magnetic resonance imaging (MRI) scan is a noninvasive test, which uses a magnetic field and radio-frequency waves to give a detailed view of the soft tissues of the brain. Magnetic resonance angiography (MRA) – scans created using magnetic resonance imaging to specifically image the blood vessels and structures of the brain. A magnetic resonance angiogram can be an invasive procedure, involving the introduction of contrast dyes (e.g., gadolinium MR contrast agents) into the vasculature (circulatory system) of a patient using a catheter inserted into an artery and passed through the blood vessels to the brain. Once the catheter is in place, the contrast dye is injected into the bloodstream and the MR images are taken. Additionally or alternatively, flow-dependent or other contrast-free magnetic resonance imaging techniques can be used to determine the location and other properties of the vasculature. AVMs can occur in various parts of the body: brain (cerebral AV malformation) spleen lung kidney spinal cord liver intercostal space iris spermatic cord extremities – arm, shoulder, etc. AVMs may occur in isolation or as a part of another disease (for example, Sturge-Weber syndrome or hereditary hemorrhagic telangiectasia). AVMs have been shown to be associated with aortic stenosis. Bleeding from an AVM can be relatively mild or devastating. It can cause severe and less often fatal strokes. Treatment Treatment for AVMs in the brain can be symptomatic, and patients should be followed by a neurologist for any seizures, headaches, or focal neurologic deficits. AVM-specific treatment may also involve endovascular embolization, neurosurgery or radiosurgery. Embolization, that is, cutting off the blood supply to the AVM with coils, particles, acrylates, or polymers introduced by a radiographically guided catheter, may be used in addition to neurosurgery or radiosurgery, but is rarely successful in isolation except in smaller AVMs. A gamma knife may also be used. 
If a cerebral AVM is detected before a stroke occurs, usually the arteries feeding blood into the nidus can be closed off to avert the danger. Interventional therapy may be relatively risky in the short term. Treatment of lung AVMs is typically performed with endovascular embolization alone, which is considered the standard of care. Epidemiology The estimated detection rate of AVM in the US general population is 1.4/100,000 per year. This is approximately one-fifth to one-seventh the incidence of intracranial aneurysms. An estimated 300,000 Americans have AVMs, of whom 12% (approximately 36,000) will exhibit symptoms of greatly varying severity. History Hubert von Luschka (1820–1875) and Rudolf Virchow (1821–1902) first described arteriovenous malformations in the mid-1800s. Herbert Olivecrona (1891–1980) performed the first surgical excision of an intracranial AVM in 1932. Society and culture Notable cases Actor Ricardo Montalbán was born with spinal AVM. During the filming of the 1951 film Across the Wide Missouri, Montalbán was thrown from his horse, knocked unconscious, and trampled by another horse which aggravated his AVM and resulted in a painful back injury that never healed. The pain increased as he aged, and in 1993, Montalbán underwent hours of spinal surgery which left him paralyzed below the waist and using a wheelchair. Composer and lyricist William Finn was diagnosed with AVM and underwent gamma knife surgery in September 1992, soon after he won the 1992 Tony Award for best musical, awarded to "Falsettos". Finn wrote the 1998 Off-Broadway musical A New Brain about the experience. Phoenix Suns point guard AJ Price nearly died from AVM in 2004 while a student at the University of Connecticut. On December 13, 2006, Senator Tim Johnson of South Dakota was diagnosed with AVM and treated at George Washington University Hospital. Actor/comedian T. J. Miller was diagnosed with AVM in 2010; Miller had a seizure and was unable to sleep for a period. He successfully underwent surgery that had a mortality rate of 10%. On August 3, 2011, Mike Patterson of the Philadelphia Eagles collapsed on the field and had a seizure during a practice, leading to him being diagnosed with AVM. Former Florida Gators and Oakland Raiders linebacker Neiron Ball was diagnosed with AVM in 2011 while playing for Florida, but recovered and was cleared to play. On September 16, 2018, Ball was placed in a medically induced coma due to complications of the disease, which lasted until his death on September 10, 2019. Indonesian actress died from complications of AVM on November 29, 2013. Jazz guitarist Pat Martino experienced an AVM and subsequently developed amnesia and manic depression. He eventually re-learned to play the guitar by listening to his own recordings from before the aneurysm. YouTube vlogger Nikki Lilly (Nikki Christou), winner of the 2016 season of Junior Bake Off was born with AVM, which has resulted in some facial disfigurement. Country music singer Drake White was diagnosed with AVM in January 2019, and is undergoing treatment. Cultural depictions In the HBO series Six Feet Under (2001), main character Nate Fisher discovers he has an AVM after being in a car accident and getting a precautionary cat scan at the hospital during Season 1. His AVM becomes a key focus during Season 2 and again in Season 5. In season 1 episode 9 of House (2004), titled "DNR", a jazz musician has an AVM and is misdiagnosed with ALS. 
Two season three episodes also involve AVM - "Top Secret" (episode 16), in which a veteran who believes himself to be suffering from Gulf War syndrome is found to have spinal and pulmonary AVM from hereditary hemorrhagic telangiectasia; and "Resignation" (episode 22), where the patient developed AVM in her intestines after drinking pipe cleaner fluid in a suicide attempt. In the 2005 Lifetime film Dawn Anna, the titular character learns she has AVM, and undergoes a serious operation and subsequent rehabilitation, which she recovers from. See also Foix–Alajouanine syndrome Haemangioma Klippel–Trénaunay syndrome Parkes Weber syndrome References Angiogenesis Congenital vascular defects Gross pathology RASopathies Vascular anomalies
Arteriovenous malformation
[ "Biology" ]
2,424
[ "Angiogenesis" ]
3,170
https://en.wikipedia.org/wiki/Arithmetic%20function
In number theory, an arithmetic, arithmetical, or number-theoretic function is generally any function f(n) whose domain is the positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". There is a larger class of number-theoretic functions that do not fit this definition, for example, the prime-counting functions. This article provides links to functions of both classes. An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n. Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum. Multiplicative and additive functions An arithmetic function a is completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n; completely multiplicative if a(mn) = a(m)a(n) for all natural numbers m and n; Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them. Then an arithmetic function a is additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n; multiplicative if a(mn) = a(m)a(n) for all coprime natural numbers m and n. Notation In this article, and mean that the sum or product is over all prime numbers: and Similarly, and mean that the sum or product is over all prime powers with strictly positive exponent (so is not included): The notations and mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if , then The notations can be combined: and mean that the sum or product is over all prime divisors of n. For example, if n = 18, then and similarly and mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then Ω(n), ω(n), νp(n) – prime power decomposition The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes: where p1 < p2 < ... < pk are primes and the aj are positive integers. (1 is given by the empty product.) It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation νp(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the pi then νp(n) = ai, otherwise it is zero. Then In terms of the above the prime omega functions ω and Ω are defined by To avoid repetition, formulas for the functions listed in this article are, whenever possible, given in terms of n and the corresponding pi, ai, ω, and Ω. Multiplicative functions σk(n), τ(n), d(n) – divisor sums σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number. σ1(n), the sum of the (positive) divisors of n, is usually denoted by σ(n). Since a positive number to the zero power is one, σ0(n) is therefore the number of (positive) divisors of n; it is usually denoted by d(n) or τ(n) (for the German Teiler = divisors). Setting k = 0 in the second product gives φ(n) – Euler totient function φ(n), the Euler totient function, is the number of positive integers not greater than n that are coprime to n. Jk(n) – Jordan totient function Jk(n), the Jordan totient function, is the number of k-tuples of positive integers all less than or equal to n that form a coprime (k + 1)-tuple together with n. 
It is a generalization of Euler's totient, . μ(n) – Möbius function μ(n), the Möbius function, is important because of the Möbius inversion formula. See , below. This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.) τ(n) – Ramanujan tau function τ(n), the Ramanujan tau function, is defined by its generating function identity: Although it is hard to say exactly what "arithmetical property of n" it "expresses", (τ(n) is (2π)−12 times the nth Fourier coefficient in the q-expansion of the modular discriminant function) it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σk(n) and rk(n) functions (because these are also coefficients in the expansion of modular forms). cq(n) – Ramanujan's sum cq(n), Ramanujan's sum, is the sum of the nth powers of the primitive qth roots of unity: Even though it is defined as a sum of complex numbers (irrational for most values of q), it is an integer. For a fixed value of n it is multiplicative in q: If q and r are coprime, then ψ(n) – Dedekind psi function The Dedekind psi function, used in the theory of modular functions, is defined by the formula Completely multiplicative functions λ(n) – Liouville function λ(n), the Liouville function, is defined by χ(n) – characters All Dirichlet characters χ(n) are completely multiplicative. Two characters have special notations: The principal character (mod n) is denoted by χ0(a) (or χ1(a)). It is defined as The quadratic character (mod n) is denoted by the Jacobi symbol for odd n (it is not defined for even n): In this formula is the Legendre symbol, defined for all integers a and all odd primes p by Following the normal convention for the empty product, Additive functions ω(n) – distinct prime divisors ω(n), defined above as the number of distinct primes dividing n, is additive (see Prime omega function). Completely additive functions Ω(n) – prime divisors Ω(n), defined above as the number of prime factors of n counted with multiplicities, is completely additive (see Prime omega function). νp(n) – p-adic valuation of an integer n For a fixed prime p, νp(n), defined above as the exponent of the largest power of p dividing n, is completely additive. Logarithmic derivative , where is the arithmetic derivative. Neither multiplicative nor additive π(x), Π(x), ϑ(x), ψ(x) – prime-counting functions These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive. π(x), the prime-counting function, is the number of primes not exceeding x. It is the summation function of the characteristic function of the prime numbers. A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, etc. It is the summation function of the arithmetic function which takes the value 1/k on integers which are the kth power of some prime number, and the value 0 on other integers. ϑ(x) and ψ(x), the Chebyshev functions, are defined as sums of the natural logarithms of the primes not exceeding x. The second Chebyshev function ψ(x) is the summation function of the von Mangoldt function just below. 
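For reference, the functions surveyed above have compact standard forms. Writing n = p1^a1 ⋯ pk^ak as in the prime power decomposition section, the usual definitions can be stated as:

\sigma_k(n) = \sum_{d \mid n} d^k, \qquad d(n) = \sigma_0(n) = \prod_{i=1}^{k} (a_i + 1), \qquad \varphi(n) = n \prod_{p \mid n} \Bigl(1 - \frac{1}{p}\Bigr), \qquad J_k(n) = n^k \prod_{p \mid n} \Bigl(1 - \frac{1}{p^k}\Bigr)

\mu(n) = \begin{cases} (-1)^{\omega(n)} & \text{if } \omega(n) = \Omega(n) \ (n \text{ squarefree}), \\ 0 & \text{otherwise,} \end{cases} \qquad \lambda(n) = (-1)^{\Omega(n)}, \qquad \psi(n) = n \prod_{p \mid n} \Bigl(1 + \frac{1}{p}\Bigr)

c_q(n) = \sum_{\substack{1 \le a \le q \\ \gcd(a,q) = 1}} e^{2\pi i a n / q}

As a rough illustration of how such values are computed in practice (the program below is a sketch written for this summary, not code from any standard library, and the variable names are ad hoc), the multiplicative functions above can be evaluated one prime power at a time by trial division:

/* Sketch: compute d(n), sigma(n), phi(n) and mu(n) from the factorization of n. */
#include <stdio.h>

int main(void)
{
    unsigned long n = 360;                 /* example input */
    unsigned long m = n, phi = 1, sigma = 1, d = 1;
    long mu = 1;
    for (unsigned long p = 2; p * p <= m; p++) {
        if (m % p != 0) continue;
        unsigned long a = 0, pk = 1;       /* a = nu_p(n), pk = p^a */
        while (m % p == 0) { m /= p; a++; pk *= p; }
        phi   *= pk - pk / p;              /* p^a - p^(a-1)            */
        sigma *= (pk * p - 1) / (p - 1);   /* 1 + p + ... + p^a        */
        d     *= a + 1;
        mu     = (a > 1) ? 0 : -mu;        /* vanishes unless squarefree */
    }
    if (m > 1) {                           /* one prime factor left over */
        phi *= m - 1; sigma *= m + 1; d *= 2;
        mu = (mu == 0) ? 0 : -mu;
    }
    printf("n=%lu  d=%lu  sigma=%lu  phi=%lu  mu=%ld\n", n, d, sigma, phi, mu);
    return 0;
}

Because each of these functions is multiplicative, its value is assembled one prime power p^a at a time, which is exactly the structure that the product formulas above express.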
Λ(n) – von Mangoldt function Λ(n), the von Mangoldt function, is 0 unless the argument n is a prime power , in which case it is the natural logarithm of the prime p: p(n) – partition function p(n), the partition function, is the number of ways of representing n as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different: λ(n) – Carmichael function λ(n), the Carmichael function, is the smallest positive number such that   for all a coprime to n. Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo n. For powers of odd primes and for 2 and 4, λ(n) is equal to the Euler totient function of n; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of n: and for general n it is the least common multiple of λ of each of the prime power factors of n: h(n) – class number h(n), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant n. The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples. rk(n) – sum of k squares rk(n) is the number of ways n can be represented as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different. D(n) – Arithmetic derivative Using the Heaviside notation for the derivative, the arithmetic derivative D(n) is a function such that if n prime, and (the product rule) Summation functions Given an arithmetic function a(n), its summation function A(x) is defined by A can be regarded as a function of a real variable. Given a positive integer m, A is constant along open intervals m < x < m + 1, and has a jump discontinuity at each integer for which a(m) ≠ 0. Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right: Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large x. A classical example of this phenomenon is given by the divisor summatory function, the summation function of d(n), the number of divisors of n: An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that g is an average order of f if as x tends to infinity. The example above shows that d(n) has the average order log(n). Dirichlet convolution Given an arithmetic function a(n), let Fa(s), for complex s, be the function defined by the corresponding Dirichlet series (where it converges): Fa(s) is called a generating function of a(n). The simplest such series, corresponding to the constant function a(n) = 1 for all n, is ζ(s) the Riemann zeta function. The generating function of the Möbius function is the inverse of the zeta function: Consider two arithmetic functions a and b and their respective generating functions Fa(s) and Fb(s). 
The product Fa(s)Fb(s) can be computed as follows: It is a straightforward exercise to show that if c(n) is defined by then This function c is called the Dirichlet convolution of a and b, and is denoted by . A particularly important case is convolution with the constant function a(n) = 1 for all n, corresponding to multiplying the generating function by the zeta function: Multiplying by the inverse of the zeta function gives the Möbius inversion formula: If f is multiplicative, then so is g. If f is completely multiplicative, then g is multiplicative, but may or may not be completely multiplicative. Relations among the functions There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions. Here are a few examples: Dirichlet convolutions     where λ is the Liouville function.             Möbius inversion             Möbius inversion                         Möbius inversion             Möbius inversion             Möbius inversion           where λ is the Liouville function.             Möbius inversion Sums of squares For all     (Lagrange's four-square theorem). where the Kronecker symbol has the values There is a formula for r3 in the section on class numbers below. where .     where Define the function as That is, if n is odd, is the sum of the kth powers of the divisors of n, that is, and if n is even it is the sum of the kth powers of the even divisors of n minus the sum of the kth powers of the odd divisors of n.     Adopt the convention that Ramanujan's if x is not an integer. Divisor sum convolutions Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series: The sequence is called the convolution or the Cauchy product of the sequences an and bn. These formulas may be proved analytically (see Eisenstein series) or by elementary methods.                     where τ(n) is Ramanujan's function.     Since σk(n) (for natural number k) and τ(n) are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples. Extend the domain of the partition function by setting       This recurrence can be used to compute p(n). Class number related Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number h of quadratic number fields to the Jacobi symbol. An integer D is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to D ≠ 1 and either a) D is squarefree and D ≡ 1 (mod 4) or b) D ≡ 0 (mod 4), D/4 is squarefree, and D/4 ≡ 2 or 3 (mod 4). Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol: Then if D < −4 is a fundamental discriminant There is also a formula relating r3 and h. Again, let D be a fundamental discriminant, D < −4. Then Prime-count related Let   be the nth harmonic number. Then   is true for every natural number n if and only if the Riemann hypothesis is true.     The Riemann hypothesis is also equivalent to the statement that, for all n > 5040, (where γ is the Euler–Mascheroni constant). This is Robin's theorem. Menon's identity In 1965 P Kesava Menon proved This has been generalized by a number of mathematicians. For example, B. Sury N. 
Rao where a1, a2, ..., as are integers, gcd(a1, a2, ..., as, n) = 1. László Fejes Tóth where m1 and m2 are odd, m = lcm(m1, m2). In fact, if f is any arithmetical function where stands for Dirichlet convolution. Miscellaneous Let m and n be distinct, odd, and positive. Then the Jacobi symbol satisfies the law of quadratic reciprocity: Let D(n) be the arithmetic derivative. Then the logarithmic derivative See Arithmetic derivative for details. Let λ(n) be Liouville's function. Then     and     Let λ(n) be Carmichael's function. Then     Further, See Multiplicative group of integers modulo n and Primitive root modulo n.                   Note that             Compare this with             where τ(n) is Ramanujan's function. First 100 values of some arithmetic functions Notes References Further reading External links Matthew Holden, Michael Orrison, Michael Varble Yet another Generalization of Euler's Totient Function Huard, Ou, Spearman, and Williams. Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions László Tóth, Menon's Identity and arithmetical sums representing functions of several variables Functions and mappings
Arithmetic function
[ "Mathematics" ]
3,721
[ "Mathematical analysis", "Functions and mappings", "Arithmetic functions", "Mathematical objects", "Mathematical relations", "Number theory" ]
3,172
https://en.wikipedia.org/wiki/ANSI%20C
ANSI C, ISO C, and Standard C are successive standards for the C programming language published by the American National Standards Institute (ANSI) and ISO/IEC JTC 1/SC 22/WG 14 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Historically, the names referred specifically to the original and best-supported version of the standard (known as C89 or C90). Software developers writing in C are encouraged to conform to the standards, as doing so helps portability between compilers. History and outlook The first standard for C was published by ANSI. Although this document was subsequently adopted by ISO/IEC and subsequent revisions published by ISO/IEC have been adopted by ANSI, "ANSI C" is still used to refer to the standard. While some software developers use the term ISO C, others are standards-body neutral and use Standard C. Informal specification: K&R C (C78) The informal specification published in 1978 in Brian Kernighan and Dennis Ritchie's book The C Programming Language. Standardizing C In 1983, the American National Standards Institute formed a committee, X3J11, to establish a standard specification of C. In 1985, the first Standard Draft was released, sometimes referred to as C85. In 1986, another Draft Standard was released, sometimes referred to as C86. The prerelease Standard C was published in 1988, and sometimes referred to as C88. C89 The ANSI standard was completed in 1989 and ratified as ANSI X3.159-1989 "Programming Language C." This version of the language is often referred to as "ANSI C". The label "C89" is sometimes used to distinguish it from C90, which follows the same labeling scheme. C90 The same standard as C89 was ratified by ISO/IEC as ISO/IEC 9899:1990, with only formatting changes, which is sometimes referred to as C90. Therefore, the terms "C89" and "C90" refer to a language that is virtually identical. This standard has been withdrawn by both ANSI/INCITS and ISO/IEC. C95 In 1995, the ISO/IEC published an extension, called Amendment 1, for the C standard. Its full name was ISO/IEC 9899:1990/AMD1:1995, nicknamed C95. Aside from error correction there were further changes to the language capabilities, such as: Improved multi-byte and wide character support in the standard library, introducing <wchar.h> and <wctype.h> as well as multi-byte I/O Addition of digraphs to the language Specification of standard macros for the alternative specification of operators, e.g. and for && Specification of the standard macro __STDC_VERSION__ In addition to the amendment, two technical corrigenda were published by ISO for C90: ISO/IEC 9899:1990/Cor 1:1994 TCOR1 in 1994 ISO/IEC 9899:1990/Cor 2:1996 in 1996 Preprocessor test for C95 compatibility
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199409L
/* C95 compatible source code. */
#elif defined(__STDC__)
/* C89 compatible source code. */
#endif
C99 In March 2000, ANSI adopted the ISO/IEC 9899:1999 standard. This standard is commonly referred to as C99.
Some notable additions to the previous standard include: New built-in data types: long long, _Bool, _Complex, and _Imaginary Several new core language features, including static array indices, designated initializers, compound literals, variable-length arrays, flexible array members, variadic macros, and restrict keyword Several new library headers, including stdint.h, <tgmath.h>, fenv.h, <complex.h> Improved compatibility with several C++ features, including inline functions, single-line comments with //, mixing declarations and code, and universal character names in identifiers Removed several dangerous C89 language features such as implicit function declarations and implicit int Three technical corrigenda were published by ISO for C99: ISO/IEC 9899:1999/Cor 1:2001(E) ISO/IEC 9899:1999/Cor 2:2004(E) ISO/IEC 9899:1999/Cor 3:2007(E), notable for deprecating the standard library function gets This standard has been withdrawn by both ANSI/INCITS and ISO/IEC in favour of C11. C11 C11 was officially ratified and published on December 8, 2011. Notable features include improved Unicode support, type-generic expressions using the new _Generic keyword, a cross-platform multi-threading API (threads.h), and atomic types support in both core language and the library (stdatomic.h). One technical corrigendum has been published by ISO for C11: ISO/IEC 9899:2011/Cor 1:2012 C17 C17 was published in June 2018. Rather than introducing new language features, it only addresses defects in C11. C23 C23 was published in October 2024, and is the current standard for the C programming language. Other related ISO publications As part of the standardization process, ISO/IEC also publishes technical reports and specifications related to the C language: ISO/IEC TR 19769:2004, on library extensions to support Unicode transformation formats, integrated into C11 ISO/IEC TR 24731-1:2007, on library extensions to support bounds-checked interfaces, integrated into C11 ISO/IEC TR 18037:2008, on embedded C extensions ISO/IEC TR 24732:2009, on decimal floating point arithmetic, superseded by ISO/IEC TS 18661-2:2015 ISO/IEC TR 24747:2009, on special mathematical functions, ISO/IEC TR 24731-2:2010, on library extensions to support dynamic allocation functions ISO/IEC TS 17961:2013, on secure coding in C ISO/IEC TS 18661-1:2014, on IEC 60559:2011-compatible binary floating-point arithmetic ISO/IEC TS 18661-2:2015, on IEC 60559:2011-compatible decimal floating point arithmetic ISO/IEC TS 18661-3:2015, on IEC 60559:2011-compatible interchange and extended floating-point types ISO/IEC TS 18661-4:2015, on IEC 60559:2011-compatible supplementary functions More technical specifications are in development and pending approval, including the fifth and final part of TS 18661, a software transactional memory specification, and parallel library extensions. Support from major compilers ANSI C is now supported by almost all the widely used compilers. GCC and Clang are two major C compilers popular today, both based on the C11 with updates including changes from later specifications such as C17. Any source code written only in standard C and without any hardware dependent assumptions is virtually guaranteed to compile correctly on any platform with a conforming C implementation. 
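As a hedged illustration of how these successive revisions can be told apart in portable source code (the program below is a sketch written for this overview rather than an example taken from the standards; the __STDC_VERSION__ values shown are the ones each revision mandates), a compile-time dispatch might look like this:

/* Sketch: report which C standard the compiler claims to implement. */
#include <stdio.h>

int main(void)
{
#if !defined(__STDC__)
    printf("pre-standard (K&R) C\n");
#elif !defined(__STDC_VERSION__)
    printf("C89/C90\n");
#elif __STDC_VERSION__ >= 202311L
    printf("C23 or later\n");
#elif __STDC_VERSION__ >= 201710L
    printf("C17\n");
#elif __STDC_VERSION__ >= 201112L
    printf("C11\n");
#elif __STDC_VERSION__ >= 199901L
    printf("C99\n");
#elif __STDC_VERSION__ >= 199409L
    printf("C95\n");
#else
    printf("C90 with an unexpected __STDC_VERSION__\n");
#endif
    return 0;
}

Testing the newest value first lets the first matching branch win, so a single source file can degrade gracefully on older conforming compilers.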
Without such precautions, most programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to the reliance on compiler- or platform-specific attributes such as the exact size of certain data types and byte endianness. Compliance detectability To mitigate the differences between K&R C and the ANSI C standard, the __STDC__ ("standard c") macro can be used to split code into ANSI and K&R sections.
#if defined(__STDC__) && __STDC__
extern int getopt(int, char * const *, const char *);
#else
extern int getopt();
#endif
In the above example, a prototype is used in a function declaration for ANSI compliant implementations, while an obsolescent non-prototype declaration is used otherwise. Those are still ANSI-compliant as of C99. Note how this code checks both definition and evaluation: this is because some implementations may set __STDC__ to zero to indicate non-ANSI compliance. Compiler support List of compilers supporting ANSI C: Acornsoft ANSI C (first version in 1988, revised in 1989) Amsterdam Compiler Kit (C K&R and C89/90) ARM RealView Clang, using LLVM backend GCC (full C89/90, C99 and C11) HP C/ANSI C compiler (C89 and C99) IBM XL C/C++ (C11, starting with version 12.1) Intel's ICC LabWindows/CVI LCC Oracle Developer Studio OpenWatcom (C89/90 and some C99) Microsoft Visual C++ (C89/90 and some C99) Pelles C (C99 and C11. Windows only.) vbcc (C89/90 and C99) Tiny C Compiler (C89/90 and some C99) See also Behavioral Description Language Compatibility of C and C++ C++23, C++20, C++17, C++14, C++11, C++03, C++98, versions of the C++ programming language standard C++ Technical Report 1 References Further reading External links ISO C working group Draft ANSI C Standard (ANSI X3J11/88-090) (May 13, 1988), Third Public Review Draft ANSI C Rationale (ANSI X3J11/88-151) (Nov 18, 1988) C Information Bulletin #1 (ANSI X3J11/93-007) (May 27, 1992) ANSI C Yacc grammar ANSI C grammar, Lex specification American National Standards Institute standards C (programming language) Programming language standards
ANSI C
[ "Technology" ]
2,113
[ "American National Standards Institute standards", "Computer standards", "Programming language standards" ]
3,189
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin. The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler. Definition A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence of elements of P exists. Equivalently, every weakly ascending sequence of elements of P eventually stabilizes, meaning that there exists a positive integer n such that Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite strictly descending chain of elements of P. Equivalently, every weakly descending sequence of elements of P eventually stabilizes. Comments Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set. Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition). Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded. Example Consider the ring of integers. Each ideal of consists of all multiples of some number . For example, the ideal consists of all multiples of . Let be the ideal consisting of all multiples of . The ideal is contained inside the ideal , since every multiple of is also a multiple of . In turn, the ideal is contained in the ideal , since every multiple of is a multiple of . However, at this point there is no larger ideal; we have "topped out" at . In general, if are ideals of such that is contained in , is contained in , and so on, then there is some for which all . That is, after some point all the ideals are equal to each other. Therefore, the ideals of satisfy the ascending chain condition, where ideals are ordered by set inclusion. Hence is a Noetherian ring. See also Artinian Ascending chain condition for principal ideals Krull dimension Maximal condition on congruences Noetherian Notes Citations References External links Commutative algebra Order theory Wellfoundedness
Ascending chain condition
[ "Mathematics" ]
564
[ "Wellfoundedness", "Fields of abstract algebra", "Order theory", "Mathematical induction", "Commutative algebra" ]
3,211
https://en.wikipedia.org/wiki/Atom%20probe
The atom probe was introduced at the 14th Field Emission Symposium in 1967 by Erwin Wilhelm Müller and J. A. Panitz. It combined a field ion microscope with a mass spectrometer having a single particle detection capability and, for the first time, an instrument could “... determine the nature of one single atom seen on a metal surface and selected from neighboring atoms at the discretion of the observer”. Atom probes are unlike conventional optical or electron microscopes, in that the magnification effect comes from the magnification provided by a highly curved electric field, rather than by the manipulation of radiation paths. The method is destructive in nature removing ions from a sample surface in order to image and identify them, generating magnifications sufficient to observe individual atoms as they are removed from the sample surface. Through coupling of this magnification method with time of flight mass spectrometry, ions evaporated by application of electric pulses can have their mass-to-charge ratio computed. Through successive evaporation of material, layers of atoms are removed from a specimen, allowing for probing not only of the surface, but also through the material itself. Computer methods are used to rebuild a three-dimensional view of the sample, prior to it being evaporated, providing atomic scale information on the structure of a sample, as well as providing the type atomic species information. The instrument allows the three-dimensional reconstruction of up to billions of atoms from a sharp tip (corresponding to specimen volumes of 10,000-10,000,000 nm3). Overview Atom probe samples are shaped to implicitly provide a highly curved electric potential to induce the resultant magnification, as opposed to direct use of lensing, such as via magnetic lenses. Furthermore, in normal operation (as opposed to a field ionization modes) the atom probe does not utilize a secondary source to probe the sample. Rather, the sample is evaporated in a controlled manner (field evaporation) and the evaporated ions are impacted onto a detector, which is typically 10 to 100 cm away. The samples are required to have a needle geometry and are produced by similar techniques as TEM sample preparation electropolishing, or focused ion beam methods. Since 2006, commercial systems with laser pulsing have become available and this has expanded applications from metallic only specimens into semiconducting, insulating such as ceramics, and even geological materials. Preparation is done, often by hand, to manufacture a tip radius sufficient to induce a high electric field, with radii on the order of 100 nm. To conduct an atom probe experiment a very sharp needle shaped specimen is placed in an ultra high vacuum chamber. After introduction into the vacuum system, the sample is reduced to cryogenic temperatures (typically 20-100 K) and manipulated such that the needle's point is aimed towards an ion detector. A high voltage is applied to the specimen, and either a laser pulse is applied to the specimen or a voltage pulse (typically 1-2 kV) with pulse repetition rates in the hundreds of kilohertz range is applied to a counter electrode. The application of the pulse to the sample allows for individual atoms at the sample surface to be ejected as an ion from the sample surface at a known time. Typically the pulse amplitude and the high voltage on the specimen are computer controlled to encourage only one atom to ionize at a time, but multiple ionizations are possible. 
The delay between application of the pulse and detection of the ion(s) at the detector allow for the computation of a mass-to-charge ratio. Whilst the uncertainty in the atomic mass computed by time-of-flight methods in atom probe is sufficiently small to allow for detection of individual isotopes within a material this uncertainty may still, in some cases, confound definitive identification of atomic species. Effects such as superposition of differing ions with multiple electrons removed, or through the presence of complex species formation during evaporation may cause two or more species to have sufficiently close time-of-flights to make definitive identification impossible. History Field ion microscopy Field ion microscopy is a modification of field emission microscopy where a stream of tunneling electrons is emitted from the apex of a sharp needle-like tip cathode when subjected to a sufficiently high electric field (~3-6 V/nm). The needle is oriented towards a phosphor screen to create a projected image of the work function at the tip apex. The image resolution is limited to (2-2.5 nm), due to quantum mechanical effects and lateral variations in the electron velocity. In field ion microscopy, the tip is cooled by a cryogen and its polarity is reversed. When an imaging gas (usually hydrogen or helium) is introduced at low pressures (< 0.1 Pascal) gas ions in the high electric field at the tip apex are field ionized and produce a projected image of protruding atoms at the tip apex. The image resolution is determined primarily by the temperature of the tip but even at 78 Kelvin atomic resolution is achieved. 10-cm Atom Probe The 10-cm Atom Probe, invented in 1973 by J. A. Panitz was a “new and simple atom probe which permits rapid, in depth species identification or the more usual atom-by atom analysis provided by its predecessors ... in an instrument having a volume of less than two liters in which tip movement is unnecessary and the problems of evaporation pulse stability and alignment common to previous designs have been eliminated.” This was accomplished by combining a time of flight (TOF) mass spectrometer with a proximity focussed, dual channel plate detector, an 11.8 cm drift region and a 38° field of view. An FIM image or a desorption image of the atoms removed from the apex of a field emitter tip could be obtained. The 10-cm Atom Probe has been called the progenitor of later atom probes including the commercial instruments. Imaging Atom Probe The Imaging Atom-Probe (IAP) was introduced in 1974 by J. A. Panitz. It incorporated the features of the 10-cm Atom-Probe yet “... departs completely from [previous] atom probe philosophy. Rather than attempt to determine the identity of a surface species producing a preselected ion-image spot, we wish to determine the complete crystallographic distribution of a surface species of preselected mass-to-charge ratio. Now suppose that instead of operating the [detector] continuously, it is turned on for a short time coincidentally with the arrival of a preselected species of interest by applying a gate pulse a time T after the evaporation pulse has reached the specimen. If the duration of the gate pulse is shorter than the travel time between adjacent species, only that surface species having the unique travel time T will be detected and its complete crystallographic distribution displayed.” It was patented in 1975 as the Field Desorption Spectrometer. The Imaging Atom-Probe moniker was coined by A. J. 
Waugh in 1978 and the instrument was described in detail by J. A. Panitz in the same year. Atom Probe Tomography (APT) Modern day atom probe tomography uses a position sensitive detector aka a FIM in a box to deduce the lateral location of atoms. The idea of the APT, inspired by J. A. Panitz's Field Desorption Spectrometer patent, was developed by Mike Miller starting in 1983 and culminated with the first prototype in 1986. Various refinements were made to the instrument, including the use of a so-called position-sensitive (PoS) detector by Alfred Cerezo, Terence Godfrey, and George D. W. Smith at Oxford University in 1988. The Tomographic Atom Probe (TAP), developed by researchers at the University of Rouen in France in 1993, introduced a multichannel timing system and multianode array. Both instruments (PoSAP and TAP) were commercialized by Oxford Nanoscience and CAMECA respectively. Since then, there have been many refinements to increase the field of view, mass and position resolution, and data acquisition rate of the instrument. The Local Electrode Atom Probe was first introduced in 2003 by Imago Scientific Instruments. In 2005, the commercialization of the pulsed laser atom probe (PLAP) expanded the avenues of research from highly conductive materials (metals) to poor conductors (semiconductors like silicon) and even insulating materials. AMETEK acquired CAMECA in 2007 and Imago Scientific Instruments (Madison, WI) in 2010, making the company the sole commercial developer of APTs with more than 110 instruments installed around the world in 2019. The first few decades of work with APT focused on metals. However, with the introduction of the laser pulsed atom probe systems applications have expanded to semiconductors, ceramic and geologic materials, with some work on biomaterials. The most advanced study of biological material to date using APT involved analyzing the chemical structure of teeth of the radula of chiton Chaetopleura apiculata. In this study, the use of APT showed chemical maps of organic fibers in the surrounding nano-crystalline magnetite in the chiton teeth, fibers which were often co-located with sodium or magnesium. This has been furthered to study elephant tusks, dentin and human enamel. Theory Field evaporation Field evaporation is an effect that can occur when an atom bonded at the surface of a material is in the presence of a sufficiently high and appropriately directed electric field, where the electric field is the differential of electric potential (voltage) with respect to distance. Once this condition is met, it is sufficient that local bonding at the specimen surface is capable of being overcome by the field, allowing for evaporation of an atom from the surface to which it is otherwise bonded. Ion flight Whether evaporated from the material itself, or ionised from the gas, the ions that are evaporated are accelerated by electrostatic force, acquiring most of their energy within a few tip-radii of the sample. Subsequently, the accelerative force on any given ion is controlled by the electrostatic equation, where n is the ionisation state of the ion, and e is the fundamental electric charge. This can be equated with the mass of the ion, m, via Newton's law (F=ma): Relativistic effects in the ion flight are usually ignored, as realisable ion speeds are only a very small fraction of the speed of light. Assuming that the ion is accelerated during a very short interval, the ion can be assumed to be travelling at constant velocity. 
As the ion will travel from the tip at voltage V1 to some nominal ground potential, the speed at which the ion is travelling can be estimated by the energy transferred into the ion during (or near) ionisation. Therefore, the ion speed can be computed with the following equation, which relates kinetic energy to the energy gained from the electric field, the net positive charge ne arising from the loss of n electrons: (1/2)mU^2 = neV1, where U is the ion velocity. Solving for U, the following relation is found: U = sqrt(2neV1/m). Say that, at a certain ionization voltage, a singly charged hydrogen ion acquires a resulting velocity of 1.4×10^6 m/s at 10 kV. A singly charged deuterium ion under the same conditions would have acquired roughly (1.4×10^6)/1.41 m/s. If a detector was placed at a distance of 1 m, the ion flight times would be 1/(1.4×10^6) s and 1.41/(1.4×10^6) s. Thus, the time of the ion arrival can be used to infer the ion type itself, if the evaporation time is known. The above equation can be rearranged to give m/n = 2eV1(t/F)^2 for a known flight distance, F, and a known flight time, t, and one can substitute these values to obtain the mass-to-charge ratio for the ion. Thus for an ion which traverses a 1 m flight path, across a time of 2000 ns, given an initial accelerating voltage of 5000 V (the volt in SI units is kg⋅m^2⋅s^−3⋅A^−1) and noting that one amu is 1.66×10^−27 kg, the mass-to-charge ratio (more correctly the mass-to-ionisation value ratio) becomes ~3.86 amu/charge. The number of electrons removed, and thus net positive charge on the ion, is not known directly, but can be inferred from the histogram (spectrum) of observed ions. Magnification The magnification in an atom probe is due to the projection of ions radially away from the small, sharp tip. Subsequently, in the far-field, the ions will be greatly magnified. This magnification is sufficient to observe field variations due to individual atoms, thus allowing, in field ion and field evaporation modes, the imaging of single atoms. The standard projection model for the atom probe is an emitter geometry that is based upon a revolution of a conic section, such as a sphere, hyperboloid or paraboloid. For these tip models, solutions to the field may be approximated or obtained analytically. The magnification for a spherical emitter is inversely proportional to the radius of the tip; given a projection directly onto a spherical screen, the following equation can be obtained geometrically: M ≈ rscreen/rtip, where rscreen is the radius of the detection screen from the tip centre, and rtip the tip radius. Practical tip-to-screen distances may range from several centimeters to several meters, with increased detector area required at larger distances to subtend the same field of view. Practically speaking, the usable magnification will be limited by several effects, such as lateral vibration of the atoms prior to evaporation. Whilst the magnification of both the field ion and atom probe microscopes is extremely high, the exact magnification is dependent upon conditions specific to the examined specimen, so unlike for conventional electron microscopes, there is often little direct control on magnification, and furthermore, obtained images may have strongly variable magnifications due to fluctuations in the shape of the electric field at the surface. Reconstruction The computational conversion of the ion sequence data, as obtained from a position-sensitive detector, to a three-dimensional visualisation of atomic types, is termed "reconstruction".
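As a worked check of the flight-time relation given above, before the reconstruction discussion continues (using the rounded constants e ≈ 1.602×10⁻¹⁹ C and 1 amu ≈ 1.66×10⁻²⁷ kg; the voltage, flight path and flight time are the same ones quoted in the example):

\frac{m}{n} = 2 e V_1 \left(\frac{t}{F}\right)^2 = 2 \times (1.602\times10^{-19}\,\mathrm{C}) \times (5000\,\mathrm{V}) \times \left(\frac{2\times10^{-6}\,\mathrm{s}}{1\,\mathrm{m}}\right)^2 \approx 6.4\times10^{-27}\,\mathrm{kg} \approx 3.86\ \mathrm{amu\ per\ charge}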
Reconstruction algorithms are typically geometrically based and have several literature formulations. Most models for reconstruction assume that the tip is a spherical object, and use empirical corrections to stereographic projection to convert detector positions back to a 2D surface embedded in 3D space, R3. By sweeping this surface through R3 as a function of the ion sequence input data, such as via ion-ordering, a volume is generated onto which positions the 2D detector positions can be computed and placed three-dimensional space. Typically the sweep takes the simple form of advancement of the surface, such that the surface is expanded in a symmetric manner about its advancement axis, with the advancement rate set by a volume attributed to each ion detected and identified. This causes the final reconstructed volume to assume a rounded-conical shape, similar to a badminton shuttlecock. The detected events thus become a point cloud data with attributed experimentally measured values, such as ion time of flight or experimentally derived quantities, e.g. time of flight or detector data. This form of data manipulation allows for rapid computer visualisation and analysis, with data presented as point cloud data with additional information, such as each ion's mass to charge (as computed from the velocity equation above), voltage or other auxiliary measured quantity or computation therefrom. Data features The canonical feature of atom probe data, is its high spatial resolution in the direction through the material, which has been attributed to an ordered evaporation sequence. This data can therefore image near atomically sharp buried interfaces with the associated chemical information. The data obtained from the evaporative process is however not without artefacts that form the physical evaporation or ionisation process. A key feature of the evaporation or field ion images is that the data density is highly inhomogeneous, due to the corrugation of the specimen surface at the atomic scale. This corrugation gives rise to strong electric field gradients in the near-tip zone (on the order of an atomic radii or less from the tip), which during ionisation deflects ions away from the electric field normal. The resultant deflection means that in these regions of high curvature, atomic terraces are belied by a strong anisotropy in the detection density. Where this occurs due to a few atoms on a surface is usually referred to as a "pole", as these are coincident with the crystallographic axes of the specimen (FCC, BCC, HCP) etc. Where the edges of an atomic terrace causes deflection, a low density line is formed and is termed a "zone line". These poles and zone-lines, whilst inducing fluctuations in data density in the reconstructed datasets, which can prove problematic during post-analysis, are critical for determining information such as angular magnification, as the crystallographic relationships between features are typically well known. When reconstructing the data, owing to the evaporation of successive layers of material from the sample, the lateral and in-depth reconstruction values are highly anisotropic. Determination of the exact resolution of the instrument is of limited use, as the resolution of the device is set by the physical properties of the material under analysis. Systems Many designs have been constructed since the method's inception. Initial field ion microscopes, precursors to modern atom probes, were usually glass blown devices developed by individual research laboratories. 
System layout At a minimum, an atom probe will consist of several key pieces of equipment. A vacuum system for maintaining the low pressures (~10−8 to 10−10 Pa) required, typically a classic 3 chambered UHV design. A system for the manipulation of samples inside the vacuum, including sample viewing systems. A cooling system to reduce atomic motion, such as a helium refrigeration circuit - providing sample temperatures as low as 15K. A high voltage system to raise the sample standing voltage near the threshold for field evaporation. A high voltage pulsing system, use to create timed field evaporation events A counter electrode that can be a simple disk shape (like earlier generation atom probes), or a cone-shaped Local Electrode. The voltage pulse (negative) is typically applied to the counter electrode. A detection system for single energetic ions that includes XY position and TOF information. Optionally, an atom probe may also include laser-optical systems for laser beam targeting and pulsing, if using laser-evaporation methods. In-situ reaction systems, heaters, or plasma treatment may also be employed for some studies as well as a pure noble gas introduction for FIM. Performance Collectable ion volumes were previously limited to several thousand, or tens of thousands of ionic events. Subsequent electronics and instrumentation development has increased the rate of data accumulation, with datasets of hundreds of million atoms (dataset volumes of 107 nm3). Data collection times vary considerably depending upon the experimental conditions and the number of ions collected. Experiments take from a few minutes, to many hours to complete. Applications Metallurgy Atom probe has typically been employed in the chemical analysis of alloy systems at the atomic level. This has arisen as a result of voltage pulsed atom probes providing good chemical and sufficient spatial information in these materials. Metal samples from large grained alloys may be simple to fabricate, particularly from wire samples, with hand-electropolishing techniques giving good results. Subsequently, atom probe has been used in the analysis of the chemical composition of a wide range of alloys. Such data is critical in determining the effect of alloy constituents in a bulk material, identification of solid-state reaction features, such as solid phase precipitates. Such information may not be amenable to analysis by other means (e.g. TEM) owing to the difficulty in generating a three-dimensional dataset with composition. Semiconductors Semi-conductor materials are often analysable in atom probe, however sample preparation may be more difficult, and interpretation of results may be more complex, particularly if the semi-conductor contains phases which evaporate at differing electric field strengths. Applications such as ion implantation may be used to identify the distribution of dopants inside a semi-conducting material, which is increasingly critical in the correct design of modern nanometre scale electronics. Limitations Materials implicitly control achievable spatial resolution. Specimen geometry during the analysis is uncontrolled, yet controls projection behaviour, hence there is little control over the magnification. This induces distortions into the computer generated 3D dataset. Features of interest might evaporate in a physically different manner to the bulk sample, altering projection geometry and the magnification of the reconstructed volume. This yields strong spatial distortions in the final image. 
Volume selectability can be limited. Site specific preparation methods, e.g. using Focussed ion beam preparation, although more time-consuming, may be used to bypass such limitations. Ion overlap in some samples (e.g. between oxygen and sulfur) resulted in ambiguous analysed species. This may be mitigated by selection of experiment temperature or laser input energy to influence the ionisation number (+, ++, 3+ etc.) of the ionised groups. Data analysis can be used in some cases to statistically recover overlaps. Low molecular weight gases (Hydrogen & Helium) may be difficult to be removed from the analysis chamber, and may be adsorbed and emitted from the specimen, even though not present in the original specimen. This may also limit identification of Hydrogen in some samples. For this reason, deuterated samples have been used to overcome limitations. Results may be contingent on the parameters used to convert the 2D detected data into 3D. In more problematic materials, correct reconstruction may not be done, due to limited knowledge of the true magnification; particularly if zone or pole regions cannot be observed. References Further reading Michael K. Miller, George D.W. Smith, Alfred Cerezo, Mark G. Hetherington (1996) Atom Probe Field Ion Microscopy Monographs on the Physics and Chemistry of Materials, Oxford: Oxford University Press. . Michael K. Miller (2000) Atom Probe Tomography: Analysis at the Atomic Level. New York: Kluwer Academic. Baptiste Gault, Michael P. Moody, Julie M. Cairney, SImon P. Ringer (2012) Atom Probe Microscopy, Springer Series in Materials Science, Vol. 160, New York: Springer. David J. Larson, Ty J. Prosa, Robert M. Ulfig, Brian P. Geiser, Thomas F. Kelly (2013) Local Electrode Atom Probe Tomography - A User's Guide, Springer Characterization & Evaluation of Materials, New York: Springer. External links Video demonstrating Field Ion images, and pulsed ion evaporation www.atomprobe.com - A CAMECA provided community resource with contact information and an interactive FAQ MyScope Atom Probe Tomography - An online learning environment for those who want to learn about atom probe provided by Microscopy Australia Scientific techniques Microscopes Nanotechnology
Atom probe
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
4,793
[ "Materials science", "Measuring instruments", "Microscopes", "Microscopy", "Nanotechnology" ]
3,214
https://en.wikipedia.org/wiki/Amplifier%20figures%20of%20merit
In electronics, the figures of merit of an amplifier are numerical measures that characterize its properties and performance. Figures of merit can be given as a list of specifications that include properties such as gain, bandwidth, noise and linearity, among others listed in this article. Figures of merit are important for determining the suitability of a particular amplifier for an intended use. Gain The gain of an amplifier is the ratio of output to input power or amplitude, and is usually measured in decibels. When measured in decibels it is logarithmically related to the power ratio: G(dB)=10 log(Pout /Pin). RF amplifiers are often specified in terms of the maximum power gain obtainable, while the voltage gain of audio amplifiers and instrumentation amplifiers will be more often specified. For example, an audio amplifier with a gain given as 20 dB will have a voltage gain of ten. The use of voltage gain figure is appropriate when the amplifier's input impedance is much higher than the source impedance, and the load impedance higher than the amplifier's output impedance. If two equivalent amplifiers are being compared, the amplifier with higher gain settings would be more sensitive as it would take less input signal to produce a given amount of power. Bandwidth The bandwidth of an amplifier is the range of frequencies for which the amplifier gives "satisfactory performance". The definition of "satisfactory performance" may be different for different applications. However, a common and well-accepted metric is the half-power points (i.e. frequency where the power goes down by half its peak value) on the output vs. frequency curve. Therefore, bandwidth can be defined as the difference between the lower and upper half power points. This is therefore also known as the bandwidth. Bandwidths (otherwise called "frequency responses") for other response tolerances are sometimes quoted (, etc.) or "plus or minus 1dB" (roughly the sound level difference people usually can detect). The gain of a good quality full-range audio amplifier will be essentially flat between 20 Hz to about 20 kHz (the range of normal human hearing). In ultra-high-fidelity amplifier design, the amplifier's frequency response should extend considerably beyond this (one or more octaves either side) and might have points < 10 Hz and > . Professional touring amplifiers often have input and/or output filtering to sharply limit frequency response beyond ; too much of the amplifier's potential output power would otherwise be wasted on infrasonic and ultrasonic frequencies, and the danger of AM radio interference would increase. Modern switching amplifiers need steep low pass filtering at the output to get rid of high-frequency switching noise and harmonics. The range of frequency over which the gain is equal to or greater than 70.7% of its maximum gain is termed as bandwidth. Efficiency Efficiency is a measure of how much of the power source is usefully applied to the amplifier's output. Class A amplifiers are very inefficient, in the range of 10–20% with a max efficiency of 25% for direct coupling of the output. Inductive coupling of the output can raise their efficiency to a maximum of 50%. Drain efficiency is the ratio of output RF power to input DC power when primary input DC power has been fed to the drain of a field-effect transistor. 
Based on this definition, the drain efficiency cannot exceed 25% for a class A amplifier that is supplied drain bias current through resistors (because RF signal has its zero level at about 50% of the input DC). Manufacturers specify much higher drain efficiencies, and designers are able to obtain higher efficiencies by providing current to the drain of the transistor through an inductor or a transformer winding. In this case the RF zero level is near the DC rail and will swing both above and below the rail during operation. While the voltage level is above the DC rail current is supplied by the inductor. Class B amplifiers have a very high efficiency but are impractical for audio work because of high levels of distortion (See: Crossover distortion). In practical design, the result of a tradeoff is the class AB design. Modern Class AB amplifiers commonly have peak efficiencies between 30 and 55% in audio systems and 50-70% in radio frequency systems with a theoretical maximum of 78.5%. Commercially available Class D switching amplifiers have reported efficiencies as high as 90%. Amplifiers of Class C-F are usually known to be very high-efficiency amplifiers. RCA manufactured an AM broadcast transmitter employing a single class-C low-mu triode with an RF efficiency in the 90% range. More efficient amplifiers run cooler, and often do not need any cooling fans even in multi-kilowatt designs. The reason for this is that the loss of efficiency produces heat as a by-product of the energy lost during the conversion of power. In more efficient amplifiers there is less loss of energy so in turn less heat. In RF linear Power Amplifiers, such as cellular base stations and broadcast transmitters, special design techniques can be used to improve efficiency. Doherty designs, which use a second output stage as a "peak" amplifier, can lift efficiency from the typical 15% up to 30-35% in a narrow bandwidth. Envelope Tracking designs are able to achieve efficiencies of up to 60%, by modulating the supply voltage to the amplifier in line with the envelope of the signal. Linearity An ideal amplifier would be a totally linear device, but real amplifiers are only linear within limits. When the signal drive to the amplifier is increased, the output also increases until a point is reached where some part of the amplifier becomes saturated and cannot produce any more output; this is called clipping, and results in distortion. In most amplifiers a reduction in gain takes place before hard clipping occurs; the result is a compression effect, which (if the amplifier is an audio amplifier) sounds much less unpleasant to the ear. For these amplifiers, the 1 dB compression point is defined as the input power (or output power) where the gain is 1 dB less than the small signal gain. Sometimes this non linearity is deliberately designed in to reduce the audible unpleasantness of hard clipping under overload. Ill effects of non-linearity can be reduced with negative feedback. Linearization is an emergent field, and there are many techniques, such as feed forward, predistortion, postdistortion, in order to avoid the undesired effects of the non-linearities. Noise This is a measure of how much noise is introduced in the amplification process. Noise is an undesirable but inevitable product of the electronic devices and components; also, much noise results from intentional economies of manufacture and design time. The metric for noise performance of a circuit is noise figure or noise factor. 
Noise figure is a comparison between the output signal-to-noise ratio and the thermal noise of the input signal. Output dynamic range Output dynamic range is the range, usually given in dB, between the smallest and largest useful output levels. The lowest useful level is limited by output noise, while the largest is limited most often by distortion. The ratio of these two is quoted as the amplifier dynamic range. More precisely, if S = maximal allowed signal power and N = noise power, the dynamic range DR is DR = (S + N)/N. In many switched-mode amplifiers, dynamic range is limited by the minimum output step size. Slew rate Slew rate is the maximum rate of change of the output, usually quoted in volts per second (or microsecond). Many amplifiers are ultimately slew-rate limited (typically by the impedance of a drive current having to overcome capacitive effects at some point in the circuit), which sometimes limits the full-power bandwidth to frequencies well below the amplifier's small-signal frequency response. Rise time The rise time, tr, of an amplifier is the time taken for the output to change from 10% to 90% of its final level when driven by a step input. For a Gaussian response system (or a simple RC roll-off), the rise time is approximated by tr × BW ≈ 0.35, where tr is the rise time in seconds and BW is the bandwidth in Hz. Settling time and ringing The time taken for the output to settle to within a certain percentage of the final value (for instance 0.1%) is called the settling time, and is usually specified for oscilloscope vertical amplifiers and high-accuracy measurement systems. Ringing refers to an output variation that cycles above and below an amplifier's final value and leads to a delay in reaching a stable output. Ringing is the result of overshoot caused by an underdamped circuit. Overshoot In response to a step input, the overshoot is the amount the output exceeds its final, steady-state value. Stability Stability is an issue in all amplifiers with feedback, whether that feedback is added intentionally or results unintentionally. It is especially an issue when applied over multiple amplifying stages. Stability is a major concern in RF and microwave amplifiers. The degree of an amplifier's stability can be quantified by a so-called stability factor. There are several different stability factors, such as the Stern stability factor and the Linvill stability factor, which specify a condition that must be met for the absolute stability of an amplifier in terms of its two-port parameters. See also Audio system measurements Low-noise amplifier References External links Efficiency of Microwave Devices RF Power Amplifier Testing Electronic amplifiers
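The figures of merit defined in this article lend themselves to quick numerical checks. The following Python sketch computes decibel gain, a half-power (−3 dB) bandwidth estimate, the Gaussian rise-time approximation, and the dynamic range as defined above; the measured response and the signal and noise powers used in the example are made-up illustrative values, not data from any particular amplifier.

```python
import math

def power_gain_db(p_out_w, p_in_w):
    """Power gain in decibels: G(dB) = 10*log10(Pout/Pin)."""
    return 10 * math.log10(p_out_w / p_in_w)

def voltage_gain_from_db(gain_db):
    """Voltage gain corresponding to a decibel figure (20 dB -> 10x)."""
    return 10 ** (gain_db / 20)

def half_power_bandwidth(freqs_hz, gains_db):
    """Half-power (-3 dB) band from a measured response, given as parallel
    lists of frequency (Hz) and gain (dB); returns (f_low, f_high)."""
    peak = max(gains_db)
    passband = [f for f, g in zip(freqs_hz, gains_db) if g >= peak - 3]
    return min(passband), max(passband)

def rise_time_from_bandwidth(bw_hz):
    """Gaussian / simple RC approximation: tr ~= 0.35 / BW (seconds)."""
    return 0.35 / bw_hz

def dynamic_range_db(signal_power_w, noise_power_w):
    """DR = (S + N) / N, expressed in dB."""
    return 10 * math.log10((signal_power_w + noise_power_w) / noise_power_w)

# Illustrative numbers only.
print(power_gain_db(10.0, 0.1))            # 20 dB power gain
print(voltage_gain_from_db(20.0))          # 10x voltage gain
freqs = [10, 20, 100, 1000, 10000, 20000, 40000]
gains = [17, 20, 23, 23, 23, 20, 15]
print(half_power_bandwidth(freqs, gains))  # (20, 20000)
print(rise_time_from_bandwidth(20_000))    # ~1.75e-05 s, i.e. 17.5 microseconds
print(dynamic_range_db(1.0, 1e-6))         # ~60 dB
```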
Amplifier figures of merit
[ "Technology" ]
1,957
[ "Electronic amplifiers", "Amplifiers" ]
3,231
https://en.wikipedia.org/wiki/Absolute%20infinite
The absolute infinite (symbol: Ω), in context often called "absolute", is an extension of the idea of infinity proposed by mathematician Georg Cantor. It can be thought of as a number that is bigger than any other conceivable or inconceivable quantity, either finite or transfinite. Cantor linked the absolute infinite with God, and believed that it had various mathematical properties, including the reflection principle: every property of the absolute infinite is also held by some smaller object. Cantor's view In his writings on the subject, while using the Latin expression in Deo (in God), Cantor identifies absolute infinity with God (GA 175–176, 376, 378, 386, 399). According to Cantor, Absolute Infinity is beyond mathematical comprehension and shall be interpreted in terms of negative theology. Cantor also discussed the idea in his letters to Richard Dedekind. The Burali-Forti paradox The idea that the collection of all ordinal numbers cannot logically exist seems paradoxical to many. This is related to the Burali-Forti paradox, which implies that there can be no greatest ordinal number. All of these problems can be traced back to the idea that, for every property that can be logically defined, there exists a set of all objects that have that property. However, as in Cantor's argument (above), this idea leads to difficulties. More generally, as noted by A. W. Moore, there can be no end to the process of set formation, and thus no such thing as the totality of all sets, or the set hierarchy. Any such totality would itself have to be a set, thus lying somewhere within the hierarchy and thus failing to contain every set. A standard solution to this problem is found in Zermelo set theory, which does not allow the unrestricted formation of sets from arbitrary properties. Rather, we may form the set of all objects that have a given property and lie in some given set (Zermelo's Axiom of Separation). This allows for the formation of sets based on properties, in a limited sense, while (hopefully) preserving the consistency of the theory. While this solves the logical problem, one could argue that the philosophical problem remains. It seems natural that a set of individuals ought to exist, so long as the individuals exist. Indeed, naive set theory might be said to be based on this notion. Although Zermelo's fix allows a class to describe arbitrary (possibly "large") entities, these predicates of the metalanguage may have no formal existence (i.e., as a set) within the theory. For example, the class of all sets would be a proper class. This is philosophically unsatisfying to some and has motivated additional work in set theory and other methods of formalizing the foundations of mathematics such as New Foundations by Willard Van Orman Quine. See also Actual infinity Limitation of size Monadology Reflection principle Absolute (philosophy) Ineffability Notes Bibliography The role of the absolute infinite in Cantor's conception of set Infinity and the Mind, Rudy Rucker, Princeton, New Jersey: Princeton University Press, 1995; orig. pub. Boston: Birkhäuser, 1982. The Infinite, A. W. Moore, London, New York: Routledge, 1990. Set Theory, Skolem's Paradox and the Tractatus, A. W. Moore, Analysis 45, #1 (January 1985), pp. 13–20. Philosophy of mathematics Infinity Superlatives in religion Conceptions of God
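For readers who want the formal statement behind the parenthetical mention of Zermelo's Axiom of Separation above, one standard textbook rendering of the axiom schema is given below in LaTeX; the exact symbolization varies between presentations, so this is a representative form rather than a quotation from Cantor or from any of the works cited.

```latex
% Axiom schema of separation: for every set z and every definable property \varphi,
% there is a set y whose members are exactly the members of z satisfying \varphi.
\forall z \,\exists y \,\forall x \,\bigl( x \in y \leftrightarrow ( x \in z \land \varphi(x) ) \bigr)
```

Because every set obtained this way is carved out of an already given set z, the schema by itself yields no "set of all sets", which is how it sidesteps the difficulty described above.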
Absolute infinite
[ "Mathematics" ]
751
[ "Mathematical objects", "Infinity", "nan" ]
3,233
https://en.wikipedia.org/wiki/Acceptance%20testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. In software testing, the ISTQB gives a formal definition of acceptance testing. The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met. Some forms of acceptance testing are user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD), and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. Overview Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test. Each test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives, including correct implementation, error identification, quality verification, and other valued details. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software. UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must cover both business logic and operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured the development is progressing in the right direction. User acceptance test (UAT) criteria (in agile software development) are usually created by business customers and expressed in a business domain language. These are high-level tests to verify the completeness of a user story or stories 'played' during any sprint/iteration. Operational acceptance test (OAT) criteria (regardless of using agile, iterative, or sequential development) are defined in terms of functional and non-functional requirements, covering key quality attributes of functional stability, portability, and reliability. Process The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration. The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow, and the expected result following execution. The actual results are retained for comparison with the expected results. If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. 
If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer. The anticipated result of a successful test execution is that test cases are executed using predetermined data, actual results are recorded, actual and expected results are compared, and test results are determined. The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that once completed, and provided the acceptance criteria are met, it is expected the sponsors will sign off on the product development/enhancement as satisfying the defined requirements (previously agreed between business and product provider/developer). User acceptance testing User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing". This testing should be undertaken by the intended end user, or a subject-matter expert (SME), preferably the owner or client of the solution under test, who then provides a summary of the findings for confirmation to proceed after trial or review. In software development, UAT, as one of the final stages of a project, often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios. The materials given to the tester must be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake. The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production. User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases. UAT should be executed against test scenarios. Test scenarios usually differ from system or functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behavior. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes. In industry, a common UAT is a factory acceptance test (FAT). This test takes place before the installation of the equipment. Most of the time testers not only check that the equipment meets the specification but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test), and a final inspection. 
The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system. Operational acceptance testing Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Acceptance testing in extreme programming Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase. The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress. Types of acceptance testing Typical types of acceptance testing include the following. User acceptance testing: This may include factory acceptance testing (FAT), i.e. the testing done by a vendor before the product or system is moved to its destination site, after which site acceptance testing (SAT) may be performed by the users at the site. Operational acceptance testing: Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures. Contract and regulation acceptance testing: In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards. Factory acceptance testing: Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software. Alpha and beta testing: Alpha testing takes place at developers' sites and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called "field testing". 
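As a concrete illustration of the scenario-driven acceptance tests and pass-threshold process described above, the sketch below shows how a user-story acceptance criterion might be automated and how a suite-level result could be evaluated. It is written in Python with pytest-style assertions; the checkout functions, the discount rule, the scenario names and the 95% threshold are all invented for this example and do not come from any of the frameworks listed later in the article.

```python
# Hypothetical system under test: a checkout service (all names are invented).
def checkout_total(cart, discount_code=None):
    total = sum(item["price"] * item["qty"] for item in cart)
    if discount_code == "WELCOME10":   # assumed business rule for the example
        total *= 0.90
    return round(total, 2)

# Acceptance test derived from a user story:
# "As a new customer, I get 10% off my first order with code WELCOME10."
def test_new_customer_discount_scenario():
    cart = [{"price": 20.00, "qty": 2}, {"price": 10.00, "qty": 1}]
    assert checkout_total(cart, discount_code="WELCOME10") == 45.00

def test_no_discount_without_code():
    cart = [{"price": 20.00, "qty": 2}, {"price": 10.00, "qty": 1}]
    assert checkout_total(cart) == 50.00

# Suite-level acceptance: compare actual vs. expected outcomes and apply a
# predetermined pass threshold, as described in the Process section above.
def suite_passes(results, threshold=0.95):
    """results: list of (test_name, passed) tuples; threshold is illustrative."""
    passed = sum(1 for _, ok in results if ok)
    return passed / len(results) >= threshold

if __name__ == "__main__":
    results = [("new_customer_discount", True), ("no_discount_without_code", True)]
    print("suite passes:", suite_passes(results))
```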
Acceptance criteria According to the Project Management Institute, acceptance criteria is a "set of conditions that is required to be met before deliverables are accepted." Requirements found in acceptance criteria for a given component of the system are usually very detailed. List of acceptance-testing frameworks Concordion, Specification by example (SbE) framework Concordion.NET, acceptance testing in .NET Cucumber, a behavior-driven development (BDD) acceptance test framework Capybara, Acceptance test framework for Ruby web applications Behat, BDD acceptance framework for PHP Lettuce, BDD acceptance framework for Python Cypress Fabasoft app.test for automated acceptance tests Framework for Integrated Test (Fit) FitNesse, a fork of Fit Gauge (software), Test Automation Framework from Thoughtworks iMacros ItsNat Java Ajax web framework with built-in, server based, functional web testing capabilities. Maveryx Test Automation Framework for functional testing, regression testing, GUI testing, data-driven and codeless testing of Desktop and Web applications. Mocha, a popular web acceptance test framework based on Javascript and Node.js Playwright (software) Ranorex Robot Framework Selenium Specification by example (Specs2) Watir See also Acceptance sampling Conference room pilot Development stage Dynamic testing Engineering validation test Grey box testing Test-driven development White box testing Functional testing (manufacturing) References Sources Further reading External links Acceptance Test Engineering Guide by Microsoft patterns & practices "Using Customer Tests to Drive Development" from Methods & Tools Facilities engineering Software testing Hardware testing Procurement Agile software development
Acceptance testing
[ "Engineering" ]
2,259
[ "Software testing", "Facilities engineering", "Building engineering", "Software engineering", "Mechanical engineering by discipline" ]
3,251
https://en.wikipedia.org/wiki/Amblygonite
Amblygonite is a fluorophosphate mineral, (Li,Na)AlPO4(F,OH), composed of lithium, sodium, aluminium, phosphate, fluoride and hydroxide. The mineral occurs in pegmatite deposits and is easily mistaken for albite and other feldspars. Its density, cleavage and flame test for lithium are diagnostic. Amblygonite forms a series with montebrasite, the low-fluorine endmember. Geologic occurrence is in granite pegmatites, high-temperature tin veins, and greisens. Amblygonite occurs with spodumene, apatite, lepidolite, tourmaline, and other lithium-bearing minerals in pegmatite veins. It contains about 10% lithium, and has been utilized as a source of lithium. The chief commercial sources have historically been the deposits of California and France. History The mineral was first discovered in Saxony by August Breithaupt in 1817, and named by him from the Greek amblus, blunt, and gonia, angle, because of the obtuse angle between the cleavages. Later it was found at Montebras, Creuse, France, and at Hebron in Maine; and because of slight differences in optical character and chemical composition the names montebrasite and hebronite have been applied to the mineral from these localities. The term amblygonite has been used interchangeably in mining, whether this mineral or montebrasite was extracted. In fact, montebrasite is much more common than amblygonite, which is a rare mineral. It has been discovered in considerable quantity at Pala in San Diego County, California; Cáceres, Spain; and the Black Hills of South Dakota. A blue form of amblygonite-montebrasite has been described from Rwanda. The largest documented single crystal of amblygonite measured 7.62 × 2.44 × 1.83 m and weighed about 102 tons. Gemology Transparent amblygonite has been faceted and used as a gemstone. As a gemstone set into jewelry it is vulnerable to breakage and abrasion from general wear, as its hardness and toughness are poor. The main sources for gem material are Brazil and the United States. Australia, France, Germany, Namibia, Norway, and Spain have also produced gem-quality amblygonite. See also List of minerals References Klein, Cornelis and Hurlbut, Cornelius S., 1985, Manual of Mineralogy, 20th ed., p. 362, Mindat with location data Webmineral data Mineral Galleries Sodium minerals Lithium minerals Aluminium minerals Phosphate minerals Triclinic minerals Minerals in space group 2 Gemstones
Amblygonite
[ "Physics" ]
558
[ "Materials", "Gemstones", "Matter" ]
3,252
https://en.wikipedia.org/wiki/Amygdalin
Amygdalin (from Ancient Greek ἀμυγδαλή (amygdálē), 'almond') is a naturally occurring chemical compound found in many plants, most notably in the seeds (kernels, pips or stones) of apricots, bitter almonds, apples, peaches, cherries and plums, and in the roots of manioc. Amygdalin is classified as a cyanogenic glycoside, because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning. Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to be not only clinically ineffective in treating cancer, but also potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history". It has also been described as traditional Chinese medicine. Chemistry Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus Prunus, Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and manioc. Within these plants, amygdalin and the enzymes necessary to hydrolyze it are stored in separate locations, and only mix as a result of tissue damage. This provides a natural defense system. Amygdalin is contained in stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). Benzaldehyde released from amygdalin provides a bitter flavor. Because of a difference in a recessive gene called Sweet kernel [Sk], much less amygdalin is present in nonbitter (or sweet) almonds than in bitter almonds. In one study, bitter almond amygdalin concentrations ranged from 33 to 54 g/kg depending on variety; semibitter varieties averaged 1 g/kg and sweet varieties averaged 0.063 g/kg with significant variability based on variety and growing region. For one method of isolating amygdalin, the stones are removed from the fruit and cracked to obtain the kernels, which are dried in the sun or in ovens. The kernels are boiled in ethanol; on evaporation of the solution and the addition of diethyl ether, amygdalin is precipitated as minute white crystals. Natural amygdalin has the (R)-configuration at the chiral phenyl center. Under mild basic conditions, this stereogenic center isomerizes; the (S)-epimer is called neoamygdalin. Although the synthesized version of amygdalin is the (R)-epimer, the stereogenic center attached to the nitrile and phenyl groups easily epimerizes if the manufacturer does not store the compound correctly. Amygdalin is hydrolyzed by intestinal β-glucosidase (emulsin) and amygdalin beta-glucosidase (amygdalase) to give gentiobiose and L-mandelonitrile. Gentiobiose is further hydrolyzed to give glucose, whereas mandelonitrile (the cyanohydrin of benzaldehyde) decomposes to give benzaldehyde and hydrogen cyanide. Hydrogen cyanide in sufficient quantities (allowable daily intake: ~0.6 mg) causes cyanide poisoning, which has a fatal oral dose range of 0.6–1.5 mg/kg of body weight. 
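To put the amygdalin contents and the cyanide dose figures quoted above on a common footing, the short Python sketch below estimates the theoretical hydrogen cyanide yield from a given mass of amygdalin (one mole of HCN per mole of amygdalin on complete hydrolysis) and expresses it as a body-weight-based dose. The molar masses are standard values; the kernel mass, amygdalin content and body weight in the example are illustrative assumptions, and the result is a rough upper bound for illustration, not toxicological guidance.

```python
M_AMYGDALIN = 457.4   # g/mol, C20H27NO11
M_HCN = 27.03         # g/mol

def hcn_yield_g(amygdalin_g):
    """Theoretical HCN released on complete hydrolysis (1 mol HCN per mol amygdalin)."""
    return amygdalin_g * M_HCN / M_AMYGDALIN

def hcn_dose_mg_per_kg(kernel_mass_g, amygdalin_g_per_kg, body_mass_kg):
    """Upper-bound HCN dose in mg per kg body weight for a given kernel intake."""
    amygdalin_g = kernel_mass_g / 1000 * amygdalin_g_per_kg
    return hcn_yield_g(amygdalin_g) * 1000 / body_mass_kg

# Illustrative example: 30 g of bitter almond kernels at 40 g/kg amygdalin,
# eaten by a 70 kg adult (all three numbers assumed for the sake of the example).
print(round(hcn_dose_mg_per_kg(30, 40, 70), 2), "mg HCN per kg body weight")
# Compare with the fatal oral range of roughly 0.6-1.5 mg/kg quoted above.
```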
Laetrile Laetrile (patented 1961) is a simpler semisynthetic derivative of amygdalin. Laetrile is synthesized from amygdalin by hydrolysis. The usual preferred commercial source is from apricot kernels (Prunus armeniaca). The name is derived from the separate words "laevorotatory" and "mandelonitrile". Laevorotatory describes the stereochemistry of the molecule, while mandelonitrile refers to the portion of the molecule from which cyanide is released by decomposition. A 500 mg laetrile tablet may contain between 2.5 and 25 mg of hydrogen cyanide. Like amygdalin, laetrile is hydrolyzed in the duodenum (alkaline) and in the intestine (enzymatically) to D-glucuronic acid and L-mandelonitrile; the latter hydrolyzes to benzaldehyde and hydrogen cyanide, that in sufficient quantities causes cyanide poisoning. Claims for laetrile were based on three different hypotheses: The first hypothesis proposed that cancerous cells contained copious beta-glucosidases, which release HCN from laetrile via hydrolysis. Normal cells were reportedly unaffected, because they contained low concentrations of beta-glucosidases and high concentrations of rhodanese, which converts HCN to the less toxic thiocyanate. Later, however, it was shown that both cancerous and normal cells contain only trace amounts of beta-glucosidases and similar amounts of rhodanese. The second proposed that, after ingestion, amygdalin was hydrolyzed to mandelonitrile, transported intact to the liver and converted to a beta-glucuronide complex, which was then carried to the cancerous cells, hydrolyzed by beta-glucuronidases to release mandelonitrile and then HCN. Mandelonitrile, however, dissociates to benzaldehyde and hydrogen cyanide, and cannot be stabilized by glycosylation. Finally, the third asserted that laetrile is the discovered vitamin B-17, and further suggests that cancer is a result of "B-17 deficiency". It postulated that regular dietary administration of this form of laetrile would, therefore, actually prevent all incidences of cancer. There is no evidence supporting this conjecture in the form of a physiologic process, nutritional requirement, or identification of any deficiency syndrome. The term "vitamin B-17" is not recognized by Committee on Nomenclature of the American Institute of Nutrition Vitamins. Ernst T. Krebs (not to be confused with Hans Adolf Krebs, the discoverer of the citric acid cycle) branded laetrile as a vitamin in order to have it classified as a nutritional supplement rather than as a pharmaceutical. History of laetrile Early usage Amygdalin was first isolated in 1830 from bitter almond seeds (Prunus dulcis) by Pierre-Jean Robiquet and Antoine Boutron-Charlard. Liebig and Wöhler found three hydrolysis products of amygdalin: sugar, benzaldehyde, and hydrogen cyanide. Later research showed that sulfuric acid hydrolyzes it into D-glucose, benzaldehyde, and hydrogen cyanide; while hydrochloric acid gives mandelic acid, D-glucose, and ammonia. In 1845 amygdalin was used as a cancer treatment in Russia, and in the 1920s in the United States, but it was considered too poisonous. In the 1950s, a purportedly non-toxic, synthetic form was patented for use as a meat preservative, and later marketed as laetrile for cancer treatment. The U.S. Food and Drug Administration prohibited the interstate shipment of amygdalin and laetrile in 1977. Thereafter, 27 U.S. states legalized the use of amygdalin within those states. 
Subsequent results In a 1977 controlled, blinded trial, laetrile showed no more activity than placebo. Subsequently, laetrile was tested on 14 tumor systems without evidence of effectiveness. The Memorial Sloan–Kettering Cancer Center (MSKCC) concluded that "laetrile showed no beneficial effects." Mistakes in an earlier MSKCC press release were highlighted by a group of laetrile proponents led by Ralph Moss, former public affairs official of MSKCC who had been fired following his appearance at a press conference accusing the hospital of covering up the benefits of laetrile. These mistakes were considered scientifically inconsequential, but Nicholas Wade in Science stated that "even the appearance of a departure from strict objectivity is unfortunate." The results from these studies were published all together. A 2015 systematic review from the Cochrane Collaboration found no sound clinical evidence that laetrile or amygdalin is of benefit to cancer patients, and noted a considerable risk of serious adverse effects from cyanide poisoning. The authors also recommended, on ethical and scientific grounds, that no further clinical research into laetrile or amygdalin be conducted. Subsequent research has confirmed the evidence of harm and lack of benefit. Given the lack of evidence, laetrile has not been approved by the U.S. Food and Drug Administration or the European Commission. The U.S. National Institutes of Health evaluated the evidence separately and concluded that clinical trials of amygdalin showed little or no effect against cancer. For example, a 1982 trial by the Mayo Clinic of 175 patients found that tumor size had increased in all but one patient. The authors reported that "the hazards of amygdalin therapy were evidenced in several patients by symptoms of cyanide toxicity or by blood cyanide levels approaching the lethal range." The study concluded "Patients exposed to this agent should be instructed about the danger of cyanide poisoning, and their blood cyanide levels should be carefully monitored. Amygdalin (Laetrile) is a toxic drug that is not effective as a cancer treatment". Additionally, "No controlled clinical trials (trials that compare groups of patients who receive the new treatment to groups who do not) of laetrile have been reported." The side effects of laetrile treatment are the symptoms of cyanide poisoning. These symptoms include: nausea and vomiting, headache, dizziness, cherry-red skin color, liver damage, abnormally low blood pressure, droopy upper eyelid, trouble walking due to damaged nerves, fever, mental confusion, coma, and death. The European Food Safety Authority's Panel on Contaminants in the Food Chain has studied the potential toxicity of the amygdalin in apricot kernels. The Panel reported, "If consumers follow the recommendations of websites that promote consumption of apricot kernels, their exposure to cyanide will greatly exceed" the dose expected to be toxic. The Panel also reported that acute cyanide toxicity had occurred in adults who had consumed 20 or more kernels and that in children "five or more kernels appear to be toxic". Advocacy and legality of laetrile Advocates for laetrile assert that there is a conspiracy between the US Food and Drug Administration, the pharmaceutical industry and the medical community, including the American Medical Association and the American Cancer Society, to exploit the American people, and especially cancer patients. 
Advocates of the use of laetrile have also changed the rationale for its use, first as a treatment of cancer, then as a vitamin, then as part of a "holistic" nutritional regimen, or as treatment for cancer pain, among others, none of which have any significant evidence supporting its use. Despite the lack of evidence for its use, laetrile developed a significant following due to its wide promotion as a "pain-free" treatment of cancer as an alternative to surgery and chemotherapy that have significant side effects. The use of laetrile led to a number of deaths. The FDA and AMA crackdown, begun in the 1970s, effectively escalated prices on the black market, played into the conspiracy narrative and enabled unscrupulous profiteers to foster multimillion-dollar smuggling empires. Some American cancer patients have traveled to Mexico for treatment with the substance, for example at the Oasis of Hope Hospital in Tijuana. The actor Steve McQueen died in Mexico following surgery to remove a stomach tumor, having previously undergone extended treatment for pleural mesothelioma (a cancer associated with asbestos exposure) under the care of William D. Kelley, a de-licensed dentist and orthodontist who claimed to have devised a cancer treatment involving pancreatic enzymes, 50 daily vitamins and minerals, frequent body shampoos, enemas, and a specific diet as well as laetrile. Laetrile advocates in the United States include Dean Burk, a former chief chemist of the National Cancer Institute cytochemistry laboratory, and national arm wrestling champion Jason Vale, who falsely claimed that his kidney and pancreatic cancers were cured by eating apricot seeds. Vale was convicted in 2004 for, among other things, fraudulently marketing laetrile as a cancer cure. The court also found that Vale had made at least $500,000 from his fraudulent sales of laetrile. In New Zealand, laetrile was among the purported treatments for cancer promoted by Milan Brych, who was later convicted of medical fraud. In the 1970s, court cases in several states challenged the FDA's authority to restrict access to what they claimed are potentially lifesaving drugs. More than twenty states passed laws making the use of laetrile legal. After the unanimous Supreme Court ruling in United States v. Rutherford which established that interstate transport of the compound was illegal, usage fell off dramatically. The US Food and Drug Administration continues to seek jail sentences for vendors marketing laetrile for cancer treatment, calling it a "highly toxic product that has not shown any effect on treating cancer." See also List of ineffective cancer treatments Alternative cancer treatments References External links Laetrile/Amygdalin information from the National Cancer Institute (U.S.A.) Food and Drug Administration Commissioner's Decision on Laetrile The Rise and Fall of Laetrile Alternative cancer treatments Cyanogenic glycosides Plant toxins Health fraud B
Amygdalin
[ "Chemistry" ]
3,064
[ "Chemical ecology", "Plant toxins" ]
3,259
https://en.wikipedia.org/wiki/Amicable%20numbers
In mathematics, the amicable numbers are two different natural numbers related in such a way that the sum of the proper divisors of each is equal to the other number. That is, s(a) = b and s(b) = a, where s(n) = σ(n) − n is equal to the sum of positive divisors of n except n itself (see also divisor function). The smallest pair of amicable numbers is (220, 284). They are amicable because the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, of which the sum is 284; and the proper divisors of 284 are 1, 2, 4, 71 and 142, of which the sum is 220. The first ten amicable pairs are: (220, 284), (1184, 1210), (2620, 2924), (5020, 5564), (6232, 6368), (10744, 10856), (12285, 14595), (17296, 18416), (63020, 76084), and (66928, 66992). It is unknown if there are infinitely many pairs of amicable numbers. A pair of amicable numbers constitutes an aliquot sequence of period 2. A related concept is that of a perfect number, which is a number that equals the sum of its own proper divisors, in other words a number which forms an aliquot sequence of period 1. Numbers that are members of an aliquot sequence with period greater than 2 are known as sociable numbers. History Amicable numbers were known to the Pythagoreans, who credited them with many mystical properties. A general formula by which some of these numbers could be derived was invented circa 850 by the Iraqi mathematician Thābit ibn Qurra (826–901). Other Arab mathematicians who studied amicable numbers are al-Majriti (died 1007), al-Baghdadi (980–1037), and al-Fārisī (1260–1320). The Iranian mathematician Muhammad Baqir Yazdi (16th century) discovered the pair (9363584, 9437056), though this has often been attributed to Descartes. Much of the work of Eastern mathematicians in this area has been forgotten. Thābit ibn Qurra's formula was rediscovered by Fermat (1601–1665) and Descartes (1596–1650), to whom it is sometimes ascribed, and extended by Euler (1707–1783). It was extended further by Borho in 1972. Fermat and Descartes also rediscovered pairs of amicable numbers known to Arab mathematicians. Euler also discovered dozens of new pairs. The second smallest pair, (1184, 1210), was discovered in 1867 by 16-year-old B. Nicolò I. Paganini (not to be confused with the composer and violinist), having been overlooked by earlier mathematicians. There are over 1 billion known amicable pairs. Rules for generation While these rules do generate some pairs of amicable numbers, many other pairs are known, so these rules are by no means comprehensive. In particular, the two rules below produce only even amicable pairs, so they are of no interest for the open problem of finding amicable pairs coprime to 210 = 2·3·5·7, while over 1000 pairs coprime to 30 = 2·3·5 are known [García, Pedersen & te Riele (2003), Sándor & Crstici (2004)]. Thābit ibn Qurrah theorem The Thābit ibn Qurrah theorem is a method for discovering amicable numbers invented in the 9th century by the Arab mathematician Thābit ibn Qurrah. It states that if p = 3·2^(n−1) − 1, q = 3·2^n − 1, and r = 9·2^(2n−1) − 1, where n > 1 is an integer and p, q, and r are prime numbers, then 2^n·p·q and 2^n·r are a pair of amicable numbers. This formula gives the pairs (220, 284) for n = 2, (17296, 18416) for n = 4, and (9363584, 9437056) for n = 7, but no other such pairs are known. Numbers of the form 3·2^n − 1 are known as Thabit numbers. In order for Ibn Qurrah's formula to produce an amicable pair, two consecutive Thabit numbers must be prime; this severely restricts the possible values of n. To establish the theorem, Thâbit ibn Qurra proved nine lemmas divided into two groups. 
The first three lemmas deal with the determination of the aliquot parts of a natural integer. The second group of lemmas deals more specifically with the formation of perfect, abundant and deficient numbers. Euler's rule Euler's rule is a generalization of the Thâbit ibn Qurra theorem. It states that if p = (2^(n−m) + 1)·2^m − 1, q = (2^(n−m) + 1)·2^n − 1, and r = (2^(n−m) + 1)^2·2^(m+n) − 1, where n > m > 0 are integers and p, q, and r are prime numbers, then 2^n·p·q and 2^n·r are a pair of amicable numbers. Thābit ibn Qurra's theorem corresponds to the case m = n − 1. Euler's rule creates additional amicable pairs for (m, n) = (1, 8) and (29, 40), with no others being known. Euler (1747 & 1750) overall found 58 new pairs, increasing the number of pairs that were then known to 61. Regular pairs Let (m, n) be a pair of amicable numbers with m < n, and write m = gM and n = gN where g is the greatest common divisor of m and n. If M and N are both coprime to g and square-free then the pair (m, n) is said to be regular; otherwise, it is called irregular or exotic. If (m, n) is regular and M and N have i and j prime factors respectively, then (m, n) is said to be of type (i, j). For example, with (m, n) = (220, 284), the greatest common divisor is 4 and so M = 55 and N = 71. Therefore, (220, 284) is regular of type (2, 1). Twin amicable pairs An amicable pair (m, n) is twin if there are no integers between m and n belonging to any other amicable pair. Other results In every known case, the numbers of a pair are either both even or both odd. It is not known whether an even-odd pair of amicable numbers exists, but if it does, the even number must either be a square number or twice one, and the odd number must be a square number. However, amicable numbers where the two members have different smallest prime factors do exist: there are seven such pairs known. Also, every known pair shares at least one common prime factor. It is not known whether a pair of coprime amicable numbers exists, though if any does, the product of the two must be greater than 10^65. Also, a pair of coprime amicable numbers cannot be generated by Thabit's formula (above), nor by any similar formula. In 1955 Paul Erdős showed that the density of amicable numbers, relative to the positive integers, was 0. In 1968 Martin Gardner noted that most even amicable pairs have sums divisible by 9, and that a rule for characterizing the exceptions was obtained. According to the sum of amicable pairs conjecture, as the number of the amicable numbers approaches infinity, the percentage of the sums of the amicable pairs divisible by ten approaches 100%. Although all amicable pairs up to 10,000 are even pairs, the proportion of odd amicable pairs increases steadily towards higher numbers, and presumably there are more of them than of the even amicable pairs (A360054 in OEIS). Gaussian integer amicable pairs exist, e.g. s(8008+3960i) = 4232-8280i and s(4232-8280i) = 8008+3960i. Generalizations Amicable tuples Amicable numbers (a, b) satisfy σ(a) = a + b and σ(b) = a + b, which can be written together as σ(a) = σ(b) = a + b. This can be generalized to larger tuples, say (n1, n2, ..., nk), where we require σ(n1) = σ(n2) = ... = σ(nk) = n1 + n2 + ... + nk. For example, (1980, 2016, 2556) is an amicable triple, and (3270960, 3361680, 3461040, 3834000) is an amicable quadruple. Amicable multisets are defined analogously and generalize this a bit further. Sociable numbers Sociable numbers are the numbers in cyclic lists of numbers (with a length greater than 2) where each number is the sum of the proper divisors of the preceding number. For example, several cycles of sociable numbers of order 4 are known. Searching for sociable numbers The aliquot sequence can be represented as a directed graph, G_{n,s}, for a given integer n, where s(k) denotes the sum of the proper divisors of k. Cycles in G_{n,s} represent sociable numbers within the interval [1, n]. 
Two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs. References in popular culture Amicable numbers are featured in the novel The Housekeeper and the Professor by Yōko Ogawa, and in the Japanese film based on it. Paul Auster's collection of short stories entitled True Tales of American Life contains a story ('Mathematical Aphrodisiac' by Alex Galt) in which amicable numbers play an important role. Amicable numbers are featured briefly in the novel The Stranger House by Reginald Hill. Amicable numbers are mentioned in the French novel The Parrot's Theorem by Denis Guedj. Amicable numbers are mentioned in the JRPG Persona 4 Golden. Amicable numbers are featured in the visual novel Rewrite. Amicable numbers (220, 284) are referenced in episode 13 of the 2017 Korean drama Andante. Amicable numbers are featured in the Greek movie The Other Me (2016 film). Amicable numbers are discussed in the book Are Numbers Real? by Brian Clegg. Amicable numbers are mentioned in the 2020 novel Apeirogon by Colum McCann. See also Betrothed numbers (quasi-amicable numbers) Amicable triple - Three-number variation of Amicable numbers. Notes References External links (database of all known amicable numbers) Arithmetic dynamics Divisor function Integer sequences
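A minimal Python sketch of the ideas defined above — the proper-divisor sum s(n), amicable-pair checking, Thābit ibn Qurra's rule, and the aliquot-cycle search whose length-1 and length-2 cycles are perfect numbers and amicable pairs — is given below. The helper names, the trial-division primality test and the search bound are arbitrary choices for illustration, not part of any cited algorithm.

```python
def s(n):
    """Sum of proper divisors of n (aliquot sum), s(n) = sigma(n) - n."""
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_amicable_pair(a, b):
    return a != b and s(a) == b and s(b) == a

def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def thabit_pair(n):
    """Thabit ibn Qurra's rule: the amicable pair for exponent n, or None
    when the three Thabit-rule numbers are not all prime."""
    p = 3 * 2 ** (n - 1) - 1
    q = 3 * 2 ** n - 1
    r = 9 * 2 ** (2 * n - 1) - 1
    if is_prime(p) and is_prime(q) and is_prime(r):
        return (2 ** n * p * q, 2 ** n * r)
    return None

def aliquot_cycle(n, max_steps=50):
    """Follow the aliquot sequence from n and return the cycle if one is entered.
    Length-1 cycles are perfect numbers, length-2 cycles are amicable pairs,
    and longer cycles are sociable numbers."""
    seen = []
    while n > 0 and n not in seen and len(seen) < max_steps:
        seen.append(n)
        n = s(n)
    if n in seen:
        return seen[seen.index(n):]
    return None

print(is_amicable_pair(220, 284))        # True
print([thabit_pair(n) for n in (2, 4)])  # [(220, 284), (17296, 18416)]
print(aliquot_cycle(6))                  # [6] -- a perfect number
print(aliquot_cycle(1184))               # [1184, 1210] -- an amicable pair
```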
Amicable numbers
[ "Mathematics" ]
2,012
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Arithmetic dynamics", "Combinatorics", "Numbers", "Number theory", "Dynamical systems" ]
3,262
https://en.wikipedia.org/wiki/Agar
Agar ( or ), or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from “ogonori” and “tengusa”. As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose. Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics. Etymology The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten () (from the phrase kan-zarashi tokoroten () or "cold-exposed agar"), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar. History Macroalgae have been used widely as food by coastal cultures, especially in Southeast Asia. In the Philippines, Gracilaria, known as gulaman (also guraman, gar-garao, or gulaman dagat, among other names) in Tagalog, have been harvested and used as food for centuries, eaten both fresh or sun-dried and turned into jellies. The earliest historical attestation is from the Vocabulario de la lengua tagala (1754) by the Jesuit priests Juan de Noceda and Pedro de Sanlucar, where golaman or gulaman was defined as "una yerva, de que se haze conserva a modo de Halea, naze en la mar" ("a herb, from which a jam-like preserve is made, grows in the sea"), with an additional entry for guinolaman to refer to food made with the jelly. Carrageenan, derived from gusô (Eucheuma spp.), which also congeals into a gel-like texture is also used similarly among the Visayan peoples and have been recorded in the even earlier Diccionario De La Lengua Bisaya, Hiligueina y Haraia de la isla de Panay y Sugbu y para las demas islas (c.1637) of the Augustinian missionary Alonso de Méntrida . In the book, Méntrida describes gusô as being cooked until it melts, and then allowed to congeal into a sour dish. In Ambon Island in the Maluku Islands of Indonesia, agar is extracted from Graciliaria and eaten as a type of pickle or a sauce. Jelly seaweeds were also favoured and foraged by Malay communities living on the coasts of the Riau Archipelago and Singapore in Southeast Asia for centuries. 19th century records indicate that dried Graciliaria were one of the bulk exports of British Malaya to China. Poultices made from agar were also used for swollen knee joints and sores in Johore and Singapore. The application of agar as a food additive in Japan is alleged to have been discovered in 1658 by Mino Tarōzaemon (), an innkeeper in current Fushimi-ku, Kyoto who, according to legend, was said to have discarded surplus seaweed soup (Tokoroten) and noticed that it gelled later after a winter night's freezing. 
Agar was first subjected to chemical analysis in 1859 by the French chemist Anselme Payen, who had obtained agar from the marine alga Gelidium corneum. Beginning in the late 19th century, agar began to be used as a solid medium for growing various microbes. Agar was first described for use in microbiology in 1882 by the German microbiologist Walther Hesse, an assistant working in Robert Koch's laboratory, on the suggestion of his wife Fanny Hesse. Agar quickly supplanted gelatin as the base of microbiological media, due to its higher melting temperature, allowing microbes to be grown at higher temperatures without the media liquefying. With its newfound use in microbiology, agar production quickly increased. This production centered on Japan, which produced most of the world's agar until World War II. However, with the outbreak of World War II, many nations were forced to establish domestic agar industries in order to continue microbiological research. Around the time of World War II, approximately 2,500 tons of agar were produced annually. By the mid-1970s, production worldwide had increased dramatically to approximately 10,000 tons each year. Since then, production of agar has fluctuated due to unstable and sometimes over-utilized seaweed populations. Chemical composition Agar consists of a mixture of two polysaccharides: agarose and agaropectin, with agarose making up about 70% of the mixture and agaropectin about 30%. Agarose is a linear polymer, made up of repeating units of agarobiose, a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agaropectin is a heterogeneous mixture of smaller molecules that occur in lesser amounts, and is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, such as sulfate, glucuronate, and pyruvate. Physical properties Agar exhibits a phenomenon known as hysteresis whereby, when mixed with water, it solidifies and forms a gel below about 32–40 °C, which is called the gel point, and melts above about 85 °C, which is the melting point. Hysteresis is the property of having a difference between the gel point and melting point temperatures. This property lends a suitable balance between easy melting and good gel stability at relatively high temperatures. Since many scientific applications require incubation at temperatures close to human body temperature (37 °C), agar is more appropriate than other solidifying agents that melt at this temperature, such as gelatin. 
It is also the main ingredient in mizu yōkan, another popular Japanese food. In Philippine cuisine, it is used to make the jelly bars in the various gulaman refreshments like sago't gulaman, samalamig, or desserts such as buko pandan, agar flan, halo-halo, fruit cocktail jelly, and the black and red gulaman used in various fruit salads. In Vietnamese cuisine, jellies made of flavored layers of agar-agar, called thạch, are a popular dessert, and are often made in ornate molds for special occasions. In Indian cuisine, agar is used for making desserts. In Burmese cuisine, a sweet jelly known as kyauk kyaw is made from agar. Agar jelly is widely used in Taiwanese bubble tea. Other culinary It can be used as addition to (or as a replacement for) pectin in jelly, jam, or marmalade, as a substitute to gelatin for its superior gelling properties, and as a strengthening ingredient in souffles and custards. Another use of agar-agar is in a Russian dish ptich'ye moloko (bird's milk), a rich jellified custard (or soft meringue) used as a cake filling or chocolate-glazed as individual sweets. Agar-agar may also be used as the gelling agent in gel clarification, a culinary technique used to clarify stocks, sauces, and other liquids. Mexico has traditional candies made out of Agar gelatin, most of them in colorful, half-circle shapes that resemble a melon or watermelon fruit slice, and commonly covered with sugar. They are known in Spanish as Dulce de Agar (Agar sweets) Agar-agar is an allowed nonorganic/nonsynthetic additive used as a thickener, gelling agent, texturizer, moisturizer, emulsifier, flavor enhancer, and absorbent in certified organic foods. Microbiology Agar plate An agar plate or Petri dish is used to provide a growth medium using a mix of agar and other nutrients in which microorganisms, including bacteria and fungi, can be cultured and observed under the microscope. Agar is indigestible for many organisms so that microbial growth does not affect the gel used and it remains stable. Agar is typically sold commercially as a powder that can be mixed with water and prepared similarly to gelatin before use as a growth medium. Nutrients are typically added to meet the nutritional needs of the microbes organism, the formulations of which may be "undefined" where the precise composition is unknown, or "defined" where the exact chemical composition is known. Agar is often dispensed using a sterile media dispenser. Different algae produce various types of agar. Each agar has unique properties that suit different purposes. Because of the agarose component, the agar solidifies. When heated, agarose has the potential to melt and then solidify. Because of this property, they are referred to as "physical gels". In contrast, polyacrylamide polymerization is an irreversible process, and the resulting products are known as chemical gels. There are a variety of different types of agar that support the growth of different microorganisms. A nutrient agar may be permissive, allowing for the cultivation of any non-fastidious microorganisms; a commonly-used nutrient agar for bacteria is the Luria Bertani (LB) agar which contains lysogeny broth, a nutrient-rich medium used for bacterial growth. Additionally, 2216 Marine Broth (MB) agar, with high salt content, is optimized for growing heterotrophic marine bacteria like those of the Vibrio genus, while Terrific Broth (TB) agar is used to non-selectively culture high yields of the bacterium E. coli. 
More generally, enriched media is an agar variety that is infused with the necessary nutrients required by fastidious organisms to grow. Despite the large diversity of agar mediums, yeast extract is a common ingredient across all varieties as it is a macronutrient that provides a nitrogen source for all bacterial cell types. Other fastidious organisms may require the addition of different biological fluids such as horse or sheep blood, serum, egg yolk, and so on. Agar plates can also be selective, and can be used to promote the growth of bacteria of interest while inhibiting others. A variety of chemicals may be added to create an environment favourable for specific types of bacteria or bacteria with certain properties, but not conducive for growth of others. For example, antibiotics may be added in cloning experiments whereby bacteria with antibiotic-resistant plasmid are selected. In addition to antibiotic treated agar, other selective and indicator agar plates include TCBS agar and MacConkey agar. Thiosulfate citrate bile salts sucrose (TCBS) agar is used to differentiate Vibrio species based on their sucrose metabolism, since only some will metabolize the sucrose in the plate and change its pH. Indicator dyes included in the gel will display a visual change of the pH by changing the gel color from green to yellow. MacConkey agar contains bile salts and crystal violet to selectively grow gram-negative bacteria and differentiate between species using pH-indicator dyes that demonstrate lactose metabolism properties. Motility assays As a gel, an agar or agarose medium is porous and therefore can be used to measure microorganism motility and mobility. The gel's porosity is directly related to the concentration of agarose in the medium, so various levels of effective viscosity (from the cell's "point of view") can be selected, depending on the experimental objectives. A common identification assay involves culturing a sample of the organism deep within a block of nutrient agar. Cells will attempt to grow within the gel structure. Motile species will be able to migrate, albeit slowly, throughout the gel, and infiltration rates can then be visualized, whereas non-motile species will show growth only along the now-empty path introduced by the invasive initial sample deposition. Another setup commonly used for measuring chemotaxis and chemokinesis utilizes the under-agarose cell migration assay, whereby a layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient. Plant biology Research grade agar is used extensively in plant biology as it is optionally supplemented with a nutrient and/or vitamin mixture that allows for seedling germination in Petri dishes under sterile conditions (given that the seeds are sterilized as well). Nutrient and/or vitamin supplementation for Arabidopsis thaliana is standard across most experimental conditions. Murashige & Skoog (MS) nutrient mix and Gamborg's B5 vitamin mix in general are used. A 1.0% agar/0.44% MS+vitamin dH2O solution is suitable for growth media between normal growth temps. When using agar, within any growth medium, it is important to know that the solidification of the agar is pH-dependent. 
The optimal pH range for solidification is between 5.4 and 5.7. Usually, the application of potassium hydroxide is needed to increase the pH to this range. A general guideline is about 600 μl of 0.1 M KOH per 250 ml of GM. This entire mixture can be sterilized using the liquid cycle of an autoclave. This medium nicely lends itself to the application of specific concentrations of phytohormones etc. to induce specific growth patterns, in that one can easily prepare a solution containing the desired amount of hormone, add it to the known volume of GM, and autoclave to both sterilize and evaporate off any solvent that may have been used to dissolve the often-polar hormones. This hormone/GM solution can be spread across the surface of Petri dishes sown with germinated and/or etiolated seedlings. Experiments with the moss Physcomitrella patens, however, have shown that choice of the gelling agent – agar or Gelrite – does influence phytohormone sensitivity of the plant cell culture. Other uses Agar is used: As an impression material in dentistry. As a medium to precisely orient the tissue specimen and secure it by agar pre-embedding (especially useful for small endoscopy biopsy specimens) for histopathology processing. To make salt bridges and gel plugs for use in electrochemistry. In formicariums as a transparent substitute for sand and a source of nutrition. As a natural ingredient in forming modeling clay for young children to play with. As an allowed biofertilizer component in organic farming. As a substrate for precipitin reactions in immunology. At different times as a substitute for gelatin in photographic emulsions, arrowroot in preparing silver paper, and as a substitute for fish glue in resist etching. As an MRI elastic gel phantom to mimic tissue mechanical properties in Magnetic Resonance Elastography. Gelidium agar is used primarily for bacteriological plates. Gracilaria agar is used mainly in food applications. In 2016, AMAM, a Japanese company, developed a prototype for an agar-based commercial packaging system called Agar Plasticity, intended as a replacement for oil-based plastic packaging. See also References External links Edible thickening agents Microbiological gelling agent Dental materials Algal food ingredients Red algae Gels Polysaccharides Japanese inventions Food stabilizers Jams and jellies E-number additives Impression material
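The growth-medium recipe described above (roughly 1.0% w/v agar, 0.44% w/v MS salts plus vitamins, and about 600 μl of 0.1 M KOH per 250 ml of medium, adjusted to pH 5.4–5.7) scales linearly with batch volume. The following is a minimal illustrative sketch of that arithmetic, not a laboratory protocol: the function name and the w/v reading of the percentages are assumptions made for the example, and the KOH figure is only the starting guideline given in the text.

```python
def gm_recipe(volume_ml):
    """Scale the ~1.0% agar / 0.44% MS+vitamins / 0.1 M KOH guideline to a batch volume.
    Percentages are treated as w/v (grams per 100 ml); the KOH volume is the rough
    guideline from the text (600 ul per 250 ml) and must be titrated to pH 5.4-5.7."""
    agar_g = 1.0 / 100 * volume_ml        # 1.0% w/v agar
    ms_g = 0.44 / 100 * volume_ml         # 0.44% w/v MS salts + vitamins
    koh_ul = 600.0 * volume_ml / 250.0    # 0.1 M KOH, scaled from 600 ul per 250 ml
    return {"agar_g": round(agar_g, 2), "ms_g": round(ms_g, 2), "koh_0.1M_ul": round(koh_ul)}

print(gm_recipe(250))    # {'agar_g': 2.5, 'ms_g': 1.1, 'koh_0.1M_ul': 600}
print(gm_recipe(1000))   # {'agar_g': 10.0, 'ms_g': 4.4, 'koh_0.1M_ul': 2400}
```

In practice the KOH addition is checked against a pH meter rather than scaled blindly, which is why the text presents it only as a general guideline.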
Agar
[ "Physics", "Chemistry", "Biology" ]
3,788
[ "Red algae", "Dental materials", "Carbohydrates", "Algae", "Colloids", "Materials", "Gels", "Matter", "Polysaccharides" ]
3,263
https://en.wikipedia.org/wiki/Acid%20rain
Acid rain is rain or any other form of precipitation that is unusually acidic, meaning that it has elevated levels of hydrogen ions (low pH). Most water, including drinking water, has a neutral pH that exists between 6.5 and 8.5, but acid rain has a pH level lower than this and ranges from 4–5 on average. The more acidic the acid rain is, the lower its pH is. Acid rain can have harmful effects on plants, aquatic animals, and infrastructure. Acid rain is caused by emissions of sulfur dioxide and nitrogen oxide, which react with the water molecules in the atmosphere to produce acids. Acid rain has been shown to have adverse impacts on forests, freshwaters, soils, microbes, insects and aquatic life-forms. In ecosystems, persistent acid rain reduces tree bark durability, leaving flora more susceptible to environmental stressors such as drought, heat/cold and pest infestation. Acid rain can also degrade soil composition by stripping it of nutrients such as calcium and magnesium, which play a role in plant growth and in maintaining healthy soil. In terms of human infrastructure, acid rain also causes paint to peel, corrosion of steel structures such as bridges, and weathering of stone buildings and statues, as well as having impacts on human health. Some governments, including those in Europe and North America, have made efforts since the 1970s to reduce the release of sulfur dioxide and nitrogen oxide into the atmosphere through air pollution regulations. These efforts have had positive results due to the widespread research on acid rain starting in the 1960s and the publicized information on its harmful effects. The main sources of the sulfur and nitrogen compounds that result in acid rain are anthropogenic, but nitrogen oxides can also be produced naturally by lightning strikes and sulfur dioxide is produced by volcanic eruptions. Definition "Acid rain" is rain with a pH less than 5. "Clean" or unpolluted rain has a pH greater than 5 but still less than pH = 7, owing to the acidity of carbonic acid formed when atmospheric carbon dioxide dissolves in the raindrops, according to the following reactions: CO2 (g) + H2O (l) ⇌ H2CO3 (aq) H2CO3 (aq) ⇌ H+ (aq) + HCO3− (aq) A variety of natural and human-made sources contribute to the acidity. For example, nitric acid is produced by electric discharge in the atmosphere, such as lightning. The usual anthropogenic sources are sulfur dioxide and nitrogen oxide. They react with water (as does carbon dioxide) to give solutions with pH < 5. Occasional pH readings in rain and fog water of well below 2.4 have been reported in industrialized areas. History Acid rain was first systematically studied in Europe in the 1960s and in the United States and Canada in the following decade. In Europe The corrosive effect of polluted, acidic city air on limestone and marble was noted in the 17th century by John Evelyn, who remarked upon the poor condition of the Arundel marbles. Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides into the atmosphere have increased. In 1852, Robert Angus Smith was the first to show the relationship between acid rain and atmospheric pollution in Manchester, England. Smith coined the term "acid rain" in 1872. In the late 1960s, scientists began widely observing and studying the phenomenon. At first, the main focus in this research lay on local effects of acid rain. Waldemar Christofer Brøgger was the first to acknowledge long-distance transportation of pollutants crossing borders from the United Kingdom to Norway – a problem systematically studied by Brynjulf Ottar in the 1970s.
Ottar's work was strongly influenced by Swedish soil scientist Svante Odén, who had drawn widespread attention to Europe's acid rain problem in popular newspapers and wrote a landmark paper on the subject in 1968. In the United States The earliest report about acid rain in the United States came from chemical evidence gathered from Hubbard Brook Valley; public awareness of acid rain in the US increased in the 1970s after The New York Times reported on these findings. In 1972, a group of scientists, including Gene Likens, discovered the rain that was deposited at White Mountains of New Hampshire was acidic. The pH of the sample was measured to be 4.03 at Hubbard Brook. The Hubbard Brook Ecosystem Study followed up with a series of research studies that analyzed the environmental effects of acid rain. The alumina from soils neutralized acid rain that mixed with stream water at Hubbard Brook. The result of this research indicated that the chemical reaction between acid rain and aluminium leads to an increasing rate of soil weathering. Experimental research examined the effects of increased acidity in streams on ecological species. In 1980, scientists modified the acidity of Norris Brook, New Hampshire, and observed the change in species' behaviors. There was a decrease in species diversity, an increase in community dominants, and a reduction in the food web complexity. In 1980, the US Congress passed an Acid Deposition Act. This Act established an 18-year assessment and research program under the direction of the National Acidic Precipitation Assessment Program (NAPAP). NAPAP enlarged a network of monitoring sites to determine how acidic precipitation was, seeking to determine long-term trends, and established a network for dry deposition. Using a statistically based sampling design, NAPAP quantified the effects of acid rain on a regional basis by targeting research and surveys to identify and quantify the impact of acid precipitation on freshwater and terrestrial ecosystems. NAPAP also assessed the effects of acid rain on historical buildings, monuments, and building materials. It also funded extensive studies on atmospheric processes and potential control programs. From the start, policy advocates from all sides attempted to influence NAPAP activities to support their particular policy advocacy efforts, or to disparage those of their opponents. For the US Government's scientific enterprise, a significant impact of NAPAP were lessons learned in the assessment process and in environmental research management to a relatively large group of scientists, program managers, and the public. In 1981, the National Academy of Sciences was looking into research about the controversial issues regarding acid rain. President Ronald Reagan dismissed the issues of acid rain until his personal visit to Canada and confirmed that the Canadian border suffered from the drifting pollution from smokestacks originating in the US Midwest. Reagan honored the agreement to Canadian Prime Minister Pierre Trudeau's enforcement of anti-pollution regulation. In 1982, Reagan commissioned William Nierenberg to serve on the National Science Board. Nierenberg selected scientists including Gene Likens to serve on a panel to draft a report on acid rain. In 1983, the panel of scientists came up with a draft report, which concluded that acid rain is a real problem and solutions should be sought. 
The White House Office of Science and Technology Policy reviewed the draft report and forwarded Fred Singer's suggestions on it, which cast doubt on the cause of acid rain. The panelists rejected Singer's positions and submitted the report to Nierenberg in April. In May 1983, the House of Representatives voted against legislation controlling sulfur emissions. There was a debate about whether Nierenberg delayed the release of the report. Nierenberg denied suppressing the report and stated that it was withheld after the House's vote because it was not ready to be published. In 1991, the US National Acid Precipitation Assessment Program (NAPAP) provided its first assessment of acid rain in the United States. It reported that 5% of New England lakes were acidic, with sulfates being the most common problem. They noted that 2% of the lakes could no longer support brook trout, and 6% of the lakes were unsuitable for the survival of many minnow species. Subsequent Reports to Congress have documented chemical changes in soil and freshwater ecosystems, nitrogen saturation, soil nutrient decreases, episodic acidification, regional haze, and damage to historical monuments. Meanwhile, in 1990, the US Congress passed a series of amendments to the Clean Air Act. Title IV of these amendments established a cap and trade system designed to control emissions of sulfur dioxide and nitrogen oxides. Both these emissions proved to cause a significant problem for U.S. citizens and their access to healthy, clean air. Title IV called for a total reduction of about 10 million tons of SO2 emissions from power plants, close to a 50% reduction. It was implemented in two phases. Phase I began in 1995 and limited sulfur dioxide emissions from 110 of the largest power plants to 8.7 million tons of sulfur dioxide. One power plant in New England (Merrimack) was in Phase I. Four other plants (Newington, Mount Tom, Brayton Point, and Salem Harbor) were added under other program provisions. Phase II began in 2000 and affects most of the power plants in the country. During the 1990s, research continued. On March 10, 2005, the EPA issued the Clean Air Interstate Rule (CAIR). This rule provides states with a solution to the problem of power plant pollution that drifts from one state to another. CAIR will permanently cap emissions of SO2 and NOx in the eastern United States. When fully implemented, CAIR will reduce SO2 emissions in 28 eastern states and the District of Columbia by over 70% and NOx emissions by over 60% from 2003 levels. Overall, the cap and trade program has been successful in achieving its goals. Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976. Conventional regulation was used in the European Union, which saw a decrease of over 70% in SO2 emissions during the same period. In 2007, total SO2 emissions were 8.9 million tons, achieving the program's long-term goal ahead of the 2010 statutory deadline. In 2007 the EPA estimated that by 2010, the overall costs of complying with the program for businesses and consumers would be $1 billion to $2 billion a year, only one-fourth of what was initially predicted. Forbes says: "In 2010, by which time the cap and trade system had been augmented by the George W. Bush administration's Clean Air Interstate Rule, SO2 emissions had fallen to 5.1 million tons."
The term citizen science can be traced back as far as January 1989 to a campaign by the Audubon Society to measure acid rain. Scientist Muki Haklay cites in a policy report for the Wilson Center entitled 'Citizen Science and Policy: A European Perspective' a first use of the term 'citizen science' by R. Kerson in the magazine MIT Technology Review from January 1989. Quoting from the Wilson Center report: "The new form of engagement in science received the name "citizen science". The first recorded example of using the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist the Audubon Society in an acid-rain awareness-raising campaign. The volunteers collected samples, checked for acidity, and reported to the organization. The information was then used to demonstrate the full extent of the phenomenon." In Canada Canadian Harold Harvey was among the first to research a "dead" lake. In 1971, he and R. J. Beamish published a report, "Acidification of the La Cloche Mountain Lakes", documenting the gradual deterioration of fish stocks in 60 lakes in Killarney Park in Ontario, which they had been studying systematically since 1966. In the 1970s and 80s, acid rain was a major topic of research at the Experimental Lakes Area (ELA) in Northwestern Ontario, Canada. Researchers added sulfuric acid to whole lakes in controlled ecosystem experiments to simulate the effects of acid rain. Because its remote conditions allowed for whole-ecosystem experiments, research at the ELA showed that the effect of acid rain on fish populations started at concentrations much lower than those observed in laboratory experiments. In the context of a food web, fish populations crashed earlier than when acid rain had direct toxic effects to the fish because the acidity led to crashes in prey populations (e.g. mysids). As experimental acid inputs were reduced, fish populations and lake ecosystems recovered at least partially, although invertebrate populations have still not completely returned to the baseline conditions. This research showed both that acidification was linked to declining fish populations and that the effects could be reversed if sulfuric acid emissions decreased, and influenced policy in Canada and the United States. In 1985, seven Canadian provinces (all except British Columbia, Alberta, and Saskatchewan) and the federal government signed the Eastern Canada Acid Rain Program. The provinces agreed to limit their combined sulfur dioxide emissions to 2.3 million tonnes by 1994. The Canada-US Air Quality Agreement was signed in 1991. In 1998, all federal, provincial, and territorial Ministers of Energy and Environment signed The Canada-Wide Acid Rain Strategy for Post-2000, which was designed to protect lakes that are more sensitive than those protected by earlier policies. In India Increased risk might be posed by the expected rise in total sulphur emissions from 4,400 kilotonnes (kt) in 1990 to 6,500 kt in 2000, 10,900 kt in 2010 and 18,500 in 2020. Emissions of chemicals leading to acidification The most important gas which leads to acidification is sulfur dioxide. Emissions of nitrogen oxides which are oxidized to form nitric acid are of increasing importance due to stricter controls on emissions of sulfur compounds. 70 Tg(S) per year in the form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) from wildfires, and 7–8 Tg(S) per year from volcanoes. 
Natural phenomena The principal natural phenomena that contribute acid-producing gases to the atmosphere are emissions from volcanoes. Thus, for example, fumaroles from the Laguna Caliente crater of Poás Volcano create extremely high amounts of acid rain and fog, with acidity as high as a pH of 2, clearing an area of any vegetation and frequently causing irritation to the eyes and lungs of inhabitants in nearby settlements. Acid-producing gases are also created by biological processes that occur on the land, in wetlands, and in the oceans. The major biological source of sulfur compounds is dimethyl sulfide. Nitric acid in rainwater is an important source of fixed nitrogen for plant life, and is also produced by electrical activity in the atmosphere such as lightning. Acidic deposits have been detected in glacial ice thousands of years old in remote parts of the globe. Human activity The principal cause of acid rain is sulfur and nitrogen compounds from human sources, such as electricity generation, animal agriculture, factories, and motor vehicles. These also include power plants, which use electric power generators that account for a quarter of nitrogen oxides and two-thirds of sulfur dioxide within the atmosphere. Industrial acid rain is a substantial problem in China and Russia and areas downwind from them. These areas all burn sulfur-containing coal to generate heat and electricity. The problem of acid rain has not only increased with population and industrial growth, but has become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation; dispersal from these taller stacks causes pollutants to be carried farther, causing widespread ecological damage. Often deposition occurs a considerable distance downwind of the emissions, with mountainous regions tending to receive the greatest deposition (because of their higher rainfall). An example of this effect is the low pH of rain which falls in Scandinavia. Regarding low pH and pH imbalances in correlation to acid rain, values below pH 7 are considered acidic. Acid rain falls at a pH value of roughly 4, making it harmful for humans to consume. When these low pH levels fall in specific regions, they not only affect the environment but also human health. With acidic pH levels in humans come hair loss, low urinary pH, severe mineral imbalances, constipation, and many cases of chronic disorders such as fibromyalgia and basal cell carcinoma. Chemical process Combustion of fuels and smelting of some ores produce sulfur dioxide and nitrogen oxides, which are converted into sulfuric acid and nitric acid. In the gas phase, sulfur dioxide is oxidized to sulfuric acid in steps such as: SO2 + OH· → HOSO2· HOSO2· + O2 → HO2· + SO3 SO3 (g) + H2O (l) → H2SO4 (aq) Nitrogen dioxide reacts with hydroxyl radicals to form nitric acid: NO2 + OH· → HNO3 The detailed mechanisms depend on the presence of water and traces of iron and manganese. A number of oxidants are capable of these reactions, including ozone, hydrogen peroxide, and oxygen. Acid deposition Wet deposition Wet deposition of acids occurs when any form of precipitation (rain, snow, and so on) removes acids from the atmosphere and delivers them to the Earth's surface. This can result from the deposition of acids produced in the raindrops (see aqueous phase chemistry above) or by the precipitation removing the acids either in clouds or below clouds. Wet removal of both gases and aerosols is important for wet deposition.
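The reactions above determine how much strong acid ends up dissolved in rainwater, and the resulting pH follows from elementary equilibrium arithmetic. The short sketch below is illustrative only: the equilibrium constants, the CO2 partial pressure, and the example sulfuric acid concentration are assumed round textbook-style values rather than figures taken from this article, and minor species are ignored.

```python
import math

# Assumed, approximate constants (illustrative; consult a chemistry reference for precise data)
P_CO2 = 4.2e-4   # atmospheric CO2 partial pressure, atm
K_H = 3.4e-2     # Henry's law constant for CO2 in water, mol/(L*atm)
K_A1 = 4.45e-7   # first dissociation constant of carbonic acid

def ph_clean_rain():
    """pH of rain equilibrated with atmospheric CO2 only: [H+] ~ sqrt(Ka1 * [CO2(aq)])."""
    co2_aq = K_H * P_CO2          # dissolved CO2, mol/L
    h = math.sqrt(K_A1 * co2_aq)  # weak-acid approximation
    return -math.log10(h)

def ph_with_h2so4(conc_molar):
    """Rough pH once a fully dissociated diprotic strong acid dominates; CO2 neglected."""
    return -math.log10(2.0 * conc_molar)

print(f"clean rain pH    ~ {ph_clean_rain():.2f}")      # about 5.6
print(f"polluted rain pH ~ {ph_with_h2so4(5e-5):.2f}")  # about 4.0 for 5e-5 mol/L H2SO4
```

The two outputs, roughly pH 5.6 for rain in equilibrium with CO2 alone and roughly pH 4 once a small amount of sulfuric acid dominates, are consistent with the ranges quoted in the definition section (clean rain between 5 and 7, acid rain around 4–5 on average).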
Dry deposition Acid deposition also occurs via dry deposition in the absence of precipitation. This can be responsible for as much as 20 to 60% of total acid deposition. This occurs when particles and gases stick to the ground, plants or other surfaces. Adverse effects Acid rain has been shown to have adverse impacts on forests, freshwaters and soils, killing insect and aquatic life-forms as well as causing damage to buildings and having impacts on human health. Surface waters and aquatic animals Sulfuric acid and nitric acid have multiple impacts on aquatic ecosystems, including acidification, increased nitrogen and aluminum content, and alteration of biogeochemical processes. Both the lower pH and higher aluminium concentrations in surface water that occur as a result of acid rain can cause damage to fish and other aquatic animals. At pH lower than 5, most fish eggs will not hatch, and lower pH can kill adult fish. As lakes and rivers become more acidic, biodiversity is reduced. Acid rain has eliminated insect life and some fish species, including the brook trout in some lakes, streams, and creeks in geographically sensitive areas, such as the Adirondack Mountains of the United States. However, the extent to which acid rain contributes directly or indirectly via runoff from the catchment to lake and river acidity (i.e., depending on characteristics of the surrounding watershed) is variable. The United States Environmental Protection Agency's (EPA) website states: "Of the lakes and streams surveyed, acid rain caused acidity in 75% of the acidic lakes and about 50% of the acidic streams". Lakes hosted by silicate basement rocks are more acidic than lakes within limestone or other basement rocks with a carbonate composition (i.e. marble) due to buffering effects by carbonate minerals, even with the same amount of acid rain. Soils Soil biology and chemistry can be seriously damaged by acid rain. Some microbes are unable to tolerate changes to low pH and are killed. The enzymes of these microbes are denatured (changed in shape so they no longer function) by the acid. The hydronium ions of acid rain also mobilize toxins, such as aluminium, and leach away essential nutrients and minerals such as magnesium. 2 H+ (aq) + Mg2+ (clay) ⇌ 2 H+ (clay) + Mg2+ (aq) Soil chemistry can be dramatically changed when base cations, such as calcium and magnesium, are leached by acid rain, thereby affecting sensitive species, such as sugar maple (Acer saccharum). Soil acidification The impacts of acidic water and soil acidification on plants can be minor or, in most cases, major. Most minor cases, which do not kill the plant, can be attributed to the plant being less susceptible to acidic conditions and/or the acid rain being less potent. However, even in minor cases, the plant will eventually die due to the acidic water lowering the plant's natural pH. Acidic water enters the plant and causes important plant minerals to dissolve and be carried away, which ultimately causes the plant to die from a lack of minerals for nutrition. In major cases, which are more extreme, the same process of damage occurs as in minor cases, namely the removal of essential minerals, but at a much quicker rate. Likewise, acid rain that falls on soil and on plant leaves causes drying of the waxy leaf cuticle, which ultimately causes rapid water loss from the plant to the outside atmosphere and eventually results in death of the plant.
Soil acidification can lead to a decline in soil microbes as a result of a change in pH, which would have an adverse effect on plants due to their dependence on soil microbes to access nutrients. To see if a plant is being affected by soil acidification, one can closely observe the plant leaves. If the leaves are green and look healthy, the soil pH is normal and acceptable for plant life. But if the leaves show yellowing between the veins, the plant is suffering from acidification and is unhealthy. Moreover, a plant suffering from soil acidification cannot photosynthesize; the acid-water-induced drying of the plant can destroy chloroplast organelles. Without being able to photosynthesize, a plant cannot create nutrients for its own survival or oxygen for the survival of aerobic organisms, which affects most species on Earth and ultimately ends the purpose of the plant's existence. Forests and other vegetation Adverse effects may be indirectly related to acid rain, such as the acid's effects on soil (see above) or high concentrations of gaseous precursors to acid rain. High altitude forests are especially vulnerable as they are often surrounded by clouds and fog which are more acidic than rain. Plants are capable of adapting to acid rain. On Jinyun Mountain, Chongqing, plant species were seen adapting to new environmental conditions. The effects on the species ranged from beneficial to detrimental. With natural rainfall or mild acid rainfall, the biochemical and physiological characteristics of plant seedlings were enhanced. Once the acidity increases and the pH reaches a threshold of 3.5, the acid rain can no longer be beneficial and begins to have negative effects. Acid rain can negatively impact photosynthesis in plant leaves: when leaves are exposed to a lower pH, photosynthesis is impaired due to a decline in chlorophyll. Acid rain can also deform leaves at the cellular level; examples include tissue scarring and changes to the stomatal, epidermal, and mesophyll cells. Additional impacts of acid rain include a decline in the thickness of the cuticle on the leaf surface. Because acid rain damages leaves, it directly impacts a plant's ability to maintain a strong canopy cover, and a decline in canopy cover can leave plants more vulnerable to diseases. Dead or dying trees often appear in areas impacted by acid rain. Acid rain causes aluminum to leach from the soil, posing risks to both plant and animal life. Furthermore, it strips the soil of critical minerals and nutrients necessary for tree growth. At higher altitudes, acidic fog and clouds can deplete nutrients from tree foliage, leading to discolored or dead leaves and needles. This depletion compromises the trees' ability to absorb sunlight, weakening them and diminishing their capacity to endure cold conditions. Other plants can also be damaged by acid rain, but the effect on food crops is minimized by the application of lime and fertilizers to replace lost nutrients. In cultivated areas, limestone may also be added to increase the ability of the soil to keep the pH stable, but this tactic is largely unusable in the case of wilderness lands. When calcium is leached from the needles of red spruce, these trees become less cold tolerant and exhibit winter injury and even death. Acid rain may also affect crop productivity by necrosis or changes to soil nutrients, which ultimately prevent plants from reaching maturity.
Ocean acidification Acid rain has a much less harmful effect on oceans on a global scale, but it has an amplified impact in shallower coastal waters. Acid rain can cause the ocean's pH to fall, known as ocean acidification, making it more difficult for different coastal species to create the exoskeletons that they need to survive. These coastal species link together as part of the ocean's food chain, and without them as a food source for other marine life, more marine life will die. Coral's limestone skeleton is particularly sensitive to pH decreases, because the calcium carbonate, a core component of the limestone skeleton, dissolves in acidic (low pH) solutions. In addition to acidification, excess nitrogen inputs from the atmosphere promote increased growth of phytoplankton and other marine plants, which, in turn, may cause more frequent harmful algal blooms and eutrophication (the creation of oxygen-depleted "dead zones") in some parts of the ocean. Human health effects Acid rain can negatively impact human health, especially when people breathe in particles released from acid rain. The effects of acid rain on human health are complex and may be seen in several ways, such as respiratory issues from long-term exposure and indirect exposure through contaminated food and water sources. Nitrogen Dioxide Effects Exposure to air pollutants associated with acid rain, such as nitrogen dioxide (NO2), may have a negative impact on respiratory health. Water-soluble nitrogen dioxide accumulates in the tiny airways, where it is transformed into nitric and nitrous acids. Pneumonia caused by these nitric acids directly damages the epithelial cells lining the airways, resulting in pulmonary edema. Exposure to nitrogen dioxide also reduces the immune response by inhibiting the generation of inflammatory cytokines by alveolar macrophages in response to bacterial infection. In animal studies, the pollutant further reduces respiratory immunity by decreasing mucociliary clearance in the lower respiratory tract, which results in a reduced ability to clear respiratory infections. Sulfur Trioxide Effects The effects of sulfur trioxide and sulfuric acid are similar because both produce sulfuric acid when they come into contact with the wet surfaces of the skin or respiratory system. The amount of SO3 breathed in through the mouth is larger than the amount breathed in through the nose. When humans breathe in sulfur trioxide, small droplets of sulfuric acid form inside the body and, depending on the particle size, travel down the respiratory tract to the lungs. The effects of SO3 on the respiratory system lead to breathing difficulty in people who have asthma symptoms. Sulfur trioxide is also highly corrosive and irritating to the skin, eyes, and gastrointestinal tract upon direct exposure at a sufficient concentration or with long-term exposure. Concentrated sulfuric acid is known to burn the mouth and throat if swallowed, erode a hole in the stomach, burn the skin on contact, damage the eyes, and, in severe cases, cause death. Federal Government's recommendation Nitrogen Dioxides A 25 parts per million (ppm) maximum for nitric oxide in working air has been set by the Occupational Safety and Health Administration (OSHA) for an 8-hour workday and a 40-hour workweek. Additionally, OSHA has established a 5-ppm nitrogen dioxide exposure limit for 15 minutes in the workplace.
Sulfur Trioxide The not-to-exceed limits in the air, water, soil, or food that are recommended by regulations are often based on levels that affect animals before being modified to assist in safeguarding people. Depending on whether they employ different animal studies, have different exposure lengths (e.g., an 8-hour workday versus a 24-hour day), or for other reasons, these not-to-exceed values can vary between federal bodies. The amount of sulfur dioxide that can be emitted into the atmosphere is capped by the EPA. This reduces the quantity of sulfur dioxide in the air that turns into sulfur trioxide and sulfuric acid. Sulfuric acid concentrations in workroom air are restricted by OSHA to 1 mg/m3. Moreover, NIOSH advises a time-weighted average limit of 1 mg/m3. Anyone aware of exposure to NO2 or SO3 should talk to a doctor and check on the people around them, especially children. Other adverse effects Acid rain can damage buildings, historic monuments, and statues, especially those made of rocks, such as limestone and marble, that contain large amounts of calcium carbonate. Acids in the rain react with the calcium compounds in the stones to create gypsum, which then flakes off. CaCO3 (s) + H2SO4 (aq) → CaSO4 (s) + CO2 (g) + H2O (l) The effects of this are commonly seen on old gravestones, where acid rain can cause the inscriptions to become completely illegible. Acid rain also increases the corrosion rate of metals, in particular iron, steel, copper and bronze. Affected areas Places significantly impacted by acid rain around the globe include most of eastern Europe from Poland northward into Scandinavia, the eastern third of the United States, and southeastern Canada. Other affected areas include the southeastern coast of China and Taiwan. Prevention methods Technical solutions Many coal-fired power stations use flue-gas desulfurization (FGD) to remove sulfur-containing gases from their stack gases. For a typical coal-fired power station, FGD will remove 95% or more of the SO2 in the flue gases. An example of FGD is the wet scrubber, which is commonly used. A wet scrubber is basically a reaction tower equipped with a fan that extracts hot smoke stack gases from a power plant into the tower. Lime or limestone in slurry form is also injected into the tower to mix with the stack gases and combine with the sulfur dioxide present. The calcium carbonate of the limestone produces pH-neutral calcium sulfate that is physically removed from the scrubber. That is, the scrubber turns sulfur pollution into industrial sulfates. In some areas the sulfates are sold to chemical companies as gypsum when the purity of calcium sulfate is high. In others, they are placed in landfill. The effects of acid rain can last for generations, as the effects of pH level change can stimulate the continued leaching of undesirable chemicals into otherwise pristine water sources, killing off vulnerable insect and fish species and blocking efforts to restore native life. Fluidized bed combustion also reduces the amount of sulfur emitted by power production. Vehicle emissions control reduces emissions of nitrogen oxides from motor vehicles. International treaties International treaties on the long-range transport of atmospheric pollutants have been agreed upon by Western countries for some time. Beginning in 1979, European countries convened in order to ratify general principles discussed during the UNECE Convention. The purpose was to combat Long-Range Transboundary Air Pollution.
The 1985 Helsinki Protocol on the Reduction of Sulfur Emissions under the Convention on Long-Range Transboundary Air Pollution furthered the results of the convention. Results of the treaty have already come to fruition, as evidenced by an approximate 40 percent drop in particulate matter in North America. The effectiveness of the Convention in combatting acid rain has inspired further acts of international commitment to prevent the proliferation of particulate matter. Canada and the US signed the Air Quality Agreement in 1991. Most European countries and Canada signed the treaties. Activity of the Long-Range Transboundary Air Pollution Convention remained dormant after 1999, when 27 countries convened to further reduce the effects of acid rain. In 2000, foreign cooperation to prevent acid rain was sparked in Asia for the first time. Ten diplomats from countries ranging throughout the continent convened to discuss ways to prevent acid rain. Following these discussions, the Acid Deposition Monitoring Network in East Asia (EANET) was established in 2001 as an intergovernmental initiative to provide science-based inputs for decision makers and promote international cooperation on acid deposition in East Asia. In 2023, the EANET member countries include Cambodia, China, Indonesia, Japan, Lao PDR, Malaysia, Mongolia, Myanmar, the Philippines, Republic of Korea, Russia, Thailand and Vietnam. Emissions trading In this regulatory scheme, every current polluting facility is given or may purchase on an open market an emissions allowance for each unit of a designated pollutant it emits. Operators can then install pollution control equipment, and sell portions of their emissions allowances they no longer need for their own operations, thereby recovering some of the capital cost of their investment in such equipment. The intention is to give operators economic incentives to install pollution controls. The first emissions trading market was established in the United States by enactment of the Clean Air Act Amendments of 1990. The overall goal of the Acid Rain Program established by the Act is to achieve significant environmental and public health benefits through reductions in emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx), the primary causes of acid rain. To achieve this goal at the lowest cost to society, the program employs both regulatory and market based approaches for controlling air pollution. See also Alkaline precipitation Citizen science – one of two 'first uses' of the term was in an acid rain campaign in 1989. List of environmental issues Lists of environmental topics Ocean acidification Rain dust (an alkaline rain) Soil retrogression and degradation References Further reading Ritchie, Hannah, "What We Learned from Acid Rain: By working together, the nations of the world can solve climate change", Scientific American, vol. 330, no. 1 (January 2024), pp. 75–76. "[C]ountries will act only if they know others are willing to do the same. With acid rain, they did act collectively.... We did something similar to restore Earth's protective ozone layer.... [T]he cost of technology really matters.... In the past decade the price of solar energy has fallen by more than 90 percent and that of wind energy by more than 70 percent. Battery costs have tumbled by 98 percent since 1990, bringing the price of electric cars down with them....[T]he stance of elected officials matters more than their party affiliation.... Change can happen – but not on its own. We need to drive it." (p. 76.) 
External links National Acid Precipitation Assessment Program Report – a 98-page report to Congress (2005) Acid rain for schools Acid rain for schools – Hubbard Brook United States Environmental Protection Agency – New England Acid Rain Program (superficial) Acid Rain (more depth than ref. above) U.S. Geological Survey – What is acid rain? Acid Rain: A Continuing National Tragedy – a report from The Adirondack Council on acid rain in the Adirondack region (1998) What Happens to Acid Rain? Acid Rain and how it affects fish and other aquatic organisms Fourth Report for Policy Makers (RPM4): Towards Clean Air for Sustainable Future in East Asia through Collaborative Activities- a report for policy-makers, Acid Deposition Monitoring Network in East Asia, EANET, (2019). Rain Pollution Air pollution Water pollution Forest pathology Environmental chemistry Sulfuric acid
Acid rain
[ "Chemistry", "Environmental_science" ]
7,114
[ "Environmental chemistry", "nan", "Water pollution" ]
3,277
https://en.wikipedia.org/wiki/Antioxidant
Antioxidants are compounds that inhibit oxidation (usually occurring as autoxidation), a chemical reaction that can produce free radicals. Autoxidation leads to degradation of organic compounds, including living matter. Antioxidants are frequently added to industrial products, such as polymers, fuels, and lubricants, to extend their usable lifetimes. Foods are also treated with antioxidants to forestall spoilage, in particular the rancidification of oils and fats. In cells, antioxidants such as glutathione, mycothiol, or bacillithiol, and enzyme systems like superoxide dismutase, can prevent damage from oxidative stress. Known dietary antioxidants are vitamins A, C, and E, but the term antioxidant has also been applied to numerous other dietary compounds that only have antioxidant properties in vitro, with little evidence for antioxidant properties in vivo. Dietary supplements marketed as antioxidants have not been shown to maintain health or prevent disease in humans. History As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols, and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines. Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activity could be measured simply by placing the fat in a closed container with oxygen and measuring the rate of oxygen consumption. However, it was the identification of vitamins C and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. The possible mechanisms of action of antioxidants were first explored when it was recognized that a substance with anti-oxidative activity is likely to be one that is itself readily oxidized. Research into how vitamin E prevents the process of lipid peroxidation led to the identification of antioxidants as reducing agents that prevent oxidative reactions, often by scavenging reactive oxygen species before they can damage cells. Uses in technology Food preservatives Antioxidants are used as food additives to help guard against food deterioration. Exposure to oxygen and sunlight are the two main factors in the oxidation of food, so food is preserved by keeping in the dark and sealing it in containers or even coating it in wax, as with cucumbers. However, as oxygen is also important for plant respiration, storing plant materials in anaerobic conditions produces unpleasant flavors and unappealing colors. Consequently, packaging of fresh fruits and vegetables contains an ≈8% oxygen atmosphere. Antioxidants are an especially important class of preservatives as, unlike bacterial or fungal spoilage, oxidation reactions still occur relatively rapidly in frozen or refrigerated food. 
These preservatives include natural antioxidants such as ascorbic acid (AA, E300) and tocopherols (E306), as well as synthetic antioxidants such as propyl gallate (PG, E310), tertiary butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA, E320) and butylated hydroxytoluene (BHT, E321). Unsaturated fats can be highly susceptible to oxidation, causing rancidification. Oxidized lipids are often discolored and can impart unpleasant tastes and flavors. Thus, these foods are rarely preserved by drying; instead, they are preserved by smoking, salting, or fermenting. Even less fatty foods such as fruits are sprayed with sulfurous antioxidants prior to air drying. Metals catalyse oxidation. Some fatty foods such as olive oil are partially protected from oxidation by their natural content of antioxidants. Fatty foods are sensitive to photooxidation, which forms hydroperoxides by oxidizing unsaturated fatty acids and esters. Exposure to ultraviolet (UV) radiation can cause direct photooxidation and decompose peroxides and carbonyl molecules. These molecules undergo free radical chain reactions, but antioxidants inhibit them by preventing the oxidation processes. Cosmetics preservatives Antioxidant stabilizers are also added to fat-based cosmetics such as lipstick and moisturizers to prevent rancidity. Antioxidants in cosmetic products prevent oxidation of active ingredients and lipid content. For example, phenolic antioxidants such as stilbenes, flavonoids, and hydroxycinnamic acid strongly absorb UV radiation due to the presence of chromophores. They reduce oxidative stress from sun exposure by absorbing UV light. Industrial uses Antioxidants may be added to industrial products, such as stabilizers in fuels and additives in lubricants, to prevent oxidation and polymerization that leads to the formation of engine-fouling residues. Antioxidant polymer stabilizers are widely used to prevent the degradation of polymers, such as rubbers, plastics and adhesives, that causes a loss of strength and flexibility in these materials. Polymers containing double bonds in their main chains, such as natural rubber and polybutadiene, are especially susceptible to oxidation and ozonolysis. They can be protected by antiozonants. Oxidation can be accelerated by UV radiation in natural sunlight to cause photo-oxidation. Various specialised light stabilisers, such as HALS, may be added to plastics to prevent this. An overview of some of the most widely applied antioxidants for polymer materials is shown below: (Hindered) Phenolic Antioxidants: Act by scavenging free radicals formed during the thermal oxidation process, thus preventing chain reactions that lead to polymer degradation. Examples: butylated hydroxytoluene, 2,4-dimethyl-6-tert-butylphenol, para tertiary butyl phenol, 2,6-di-tert-butylphenol, 1,3,5-Tris(4-(tert-butyl)-3-hydroxy-2,6-dimethylbenzyl)-1,3,5-triazinane-2,4,6-trione Phosphites: Act by decomposing peroxides into non-radical products, thus preventing further generation of free radicals, and contributing to the overall oxidative stability of the polymer. Phosphites are often used in combination with phenolic antioxidants for synergistic effects. Example: tris(2,4-di-tert-butylphenyl)phosphite Thioesters: Act by decomposing peroxides into non-radical products. Thioesters are also used as co-stabilisers with primary antioxidants. Hindered Amine Light Stabilizers (HALS): HALS act by scavenging free radicals generated during photo-oxidation, thus protecting the polymer material from UV-induced degradation.
Vitamins: Naturally occurring antioxidants like Vitamin C and Vitamin E are used for specific applications. Blends: Blends of different types of antioxidants are commonly applied, as they can serve various and multiple purposes. Environmental and health hazards Synthetic phenolic antioxidants (SPAs) and aminic antioxidants have potential human and environmental health hazards. SPAs are common in indoor dust, small air particles, sediment, sewage, river water and wastewater. They are synthesized from phenolic compounds and include 2,6-di-tert-butyl-4-methylphenol (BHT), 2,6-di-tert-butyl-p-benzoquinone (BHT-Q), 2,4-di-tert-butyl-phenol (DBP) and 3-tert-butyl-4-hydroxyanisole (BHA). BHT can cause hepatotoxicity and damage to the endocrine system and may increase tumor development rates due to 1,1-dimethylhydrazine. BHT-Q can cause DNA damage and mismatches through the cleavage process, generating superoxide radicals. DBP is toxic to marine life if exposed long-term. Phenolic antioxidants have low biodegradability, but they do not have severe toxicity toward aquatic organisms at low concentrations. Another type of antioxidant, diphenylamine (DPA), is commonly used in the production of commercial, industrial lubricants and rubber products and it also acts as a supplement for automotive engine oils. Oxidative challenge in biology The vast majority of complex life on Earth requires oxygen for its metabolism, but this same oxygen is a highly reactive element that can damage living organisms. Organisms contain chemicals and enzymes that minimize this oxidative damage without interfering with the beneficial effect of oxygen. In general, antioxidant systems either prevent these reactive species from being formed, or remove them, thus minimizing their damage. Reactive oxygen species can have useful cellular functions, such as redox signaling. Thus, ideally, antioxidant systems do not remove oxidants entirely, but maintain them at some optimum concentration. Reactive oxygen species produced in cells include hydrogen peroxide (H2O2), hypochlorous acid (HClO), and free radicals such as the hydroxyl radical (·OH), and the superoxide anion (O2−). The hydroxyl radical is particularly unstable and will react rapidly and non-specifically with most biological molecules. This species is produced from hydrogen peroxide in metal-catalyzed redox reactions such as the Fenton reaction. These oxidants can damage cells by starting chemical chain reactions such as lipid peroxidation, or by oxidizing DNA or proteins. Damage to DNA can cause mutations and possibly cancer, if not reversed by DNA repair mechanisms, while damage to proteins causes enzyme inhibition, denaturation, and protein degradation. The use of oxygen as part of the process for generating metabolic energy produces reactive oxygen species. In this process, the superoxide anion is produced as a by-product of several steps in the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, since a highly reactive free radical is formed as an intermediate (Q·−). This unstable intermediate can lead to electron "leakage", when electrons jump directly to oxygen and form the superoxide anion, instead of moving through the normal series of well-controlled reactions of the electron transport chain. Peroxide is also produced from the oxidation of reduced flavoproteins, such as complex I. 
However, although these enzymes can produce oxidants, the relative importance of the electron transport chain compared with other processes that generate peroxide is unclear. In plants, algae, and cyanobacteria, reactive oxygen species are also produced during photosynthesis, particularly under conditions of high light intensity. This effect is partly offset by the involvement of carotenoids in photoinhibition, and in algae and cyanobacteria, by large amounts of iodide and selenium; these antioxidants react with over-reduced forms of the photosynthetic reaction centres to prevent the production of reactive oxygen species. Examples of bioactive antioxidant compounds Physiological antioxidants are classified into two broad divisions, depending on whether they are soluble in water (hydrophilic) or in lipids (lipophilic). In general, water-soluble antioxidants react with oxidants in the cell cytosol and the blood plasma, while lipid-soluble antioxidants protect cell membranes from lipid peroxidation. These compounds may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some such as glutathione or ubiquinone mostly present within cells, while others such as uric acid are more systemically distributed (see table below). Some antioxidants are found in only a few organisms; these compounds can be important in pathogens and can act as virulence factors. The interactions between these different antioxidants may be synergistic and interdependent. The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system. The amount of protection provided by any one antioxidant will also depend on its concentration, its reactivity towards the particular reactive oxygen species being considered, and the status of the antioxidants with which it interacts. Some compounds contribute to antioxidant defense by chelating transition metals and preventing them from catalyzing the production of free radicals in the cell. The ability of iron-binding proteins, such as transferrin and ferritin, to sequester iron is one such function. Selenium and zinc are commonly referred to as antioxidant minerals, but these chemical elements have no antioxidant action themselves; rather, they are required for the activity of antioxidant enzymes, such as glutathione reductase and superoxide dismutase. (See also selenium in biology and zinc in biology.) Uric acid Uric acid has the highest concentration of any blood antioxidant and provides over half of the total antioxidant capacity of human serum. Uric acid's antioxidant activities are also complex, given that it does not react with some oxidants, such as superoxide, but does act against peroxynitrite, peroxides, and hypochlorous acid. Concerns over elevated UA's contribution to gout must be considered as one of many risk factors. By itself, UA-related risk of gout at high levels (415–530 μmol/L) is only 0.5% per year, with an increase to 4.5% per year at UA supersaturation levels (535+ μmol/L). Many of these aforementioned studies determined UA's antioxidant actions within normal physiological levels, and some found antioxidant activity at levels as high as 285 μmol/L. Vitamin C Ascorbic acid or vitamin C, an oxidation-reduction (redox) catalyst found in both animals and plants, can reduce, and thereby neutralize, reactive oxygen species such as hydrogen peroxide.
In addition to its direct antioxidant effects, ascorbic acid is also a substrate for the redox enzyme ascorbate peroxidase, a function that is used in stress resistance in plants. Ascorbic acid is present at high levels in all parts of plants and can reach concentrations of 20 millimolar in chloroplasts. Glutathione Glutathione has antioxidant properties since the thiol group in its cysteine moiety is a reducing agent and can be reversibly oxidized and reduced. In cells, glutathione is maintained in the reduced form by the enzyme glutathione reductase and in turn reduces other metabolites and enzyme systems, such as ascorbate in the glutathione-ascorbate cycle, glutathione peroxidases and glutaredoxins, as well as reacting directly with oxidants. Due to its high concentration and its central role in maintaining the cell's redox state, glutathione is one of the most important cellular antioxidants. In some organisms glutathione is replaced by other thiols, such as by mycothiol in the Actinomycetes, bacillithiol in some gram-positive bacteria, or by trypanothione in the Kinetoplastids. Vitamin E Vitamin E is the collective name for a set of eight related tocopherols and tocotrienols, which are fat-soluble vitamins with antioxidant properties. Of these, α-tocopherol has been most studied as it has the highest bioavailability, with the body preferentially absorbing and metabolising this form. It has been claimed that the α-tocopherol form is the most important lipid-soluble antioxidant, and that it protects membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the propagation reaction from continuing. This reaction produces oxidised α-tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. This is in line with findings showing that α-tocopherol, but not water-soluble antioxidants, efficiently protects glutathione peroxidase 4 (GPX4)-deficient cells from cell death. GPx4 is the only known enzyme that efficiently reduces lipid-hydroperoxides within biological membranes. However, the roles and importance of the various forms of vitamin E are presently unclear, and it has even been suggested that the most important function of α-tocopherol is as a signaling molecule, with this molecule having no significant role in antioxidant metabolism. The functions of the other forms of vitamin E are even less well understood, although γ-tocopherol is a nucleophile that may react with electrophilic mutagens, and tocotrienols may be important in protecting neurons from damage. Pro-oxidant activities Antioxidants that are reducing agents can also act as pro-oxidants. For example, vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; however, it will also reduce metal ions such as iron and copper that generate free radicals through the Fenton reaction. While ascorbic acid is effective antioxidant, it can also oxidatively change the flavor and color of food. With the presence of transition metals, there are low concentrations of ascorbic acid that can act as a radical scavenger in the Fenton reaction. 
2 Fe3+ + Ascorbate → 2 Fe2+ + Dehydroascorbate 2 Fe2+ + 2 H2O2 → 2 Fe3+ + 2 OH· + 2 OH− The relative importance of the antioxidant and pro-oxidant activities of antioxidants is an area of current research, but vitamin C, which exerts its effects as a vitamin by oxidizing polypeptides, appears to have a mostly antioxidant action in the human body. Enzyme systems As with the chemical antioxidants, cells are protected against oxidative stress by an interacting network of antioxidant enzymes. Here, the superoxide released by processes such as oxidative phosphorylation is first converted to hydrogen peroxide and then further reduced to give water. This detoxification pathway is the result of multiple enzymes, with superoxide dismutases catalysing the first step and then catalases and various peroxidases removing hydrogen peroxide. As with antioxidant metabolites, the contributions of these enzymes to antioxidant defenses can be hard to separate from one another, but the generation of transgenic mice lacking just one antioxidant enzyme can be informative. Superoxide dismutase, catalase, and peroxiredoxins Superoxide dismutases (SODs) are a class of closely related enzymes that catalyze the breakdown of the superoxide anion into oxygen and hydrogen peroxide. SOD enzymes are present in almost all aerobic cells and in extracellular fluids. Superoxide dismutase enzymes contain metal ion cofactors that, depending on the isozyme, can be copper, zinc, manganese or iron. In humans, the copper/zinc SOD is present in the cytosol, while manganese SOD is present in the mitochondrion. There also exists a third form of SOD in extracellular fluids, which contains copper and zinc in its active sites. The mitochondrial isozyme seems to be the most biologically important of these three, since mice lacking this enzyme die soon after birth. In contrast, the mice lacking copper/zinc SOD (Sod1) are viable but have numerous pathologies and a reduced lifespan (see article on superoxide), while mice without the extracellular SOD have minimal defects (sensitive to hyperoxia). In plants, SOD isozymes are present in the cytosol and mitochondria, with an iron SOD found in chloroplasts that is absent from vertebrates and yeast. Catalases are enzymes that catalyse the conversion of hydrogen peroxide to water and oxygen, using either an iron or manganese cofactor. This protein is localized to peroxisomes in most eukaryotic cells. Catalase is an unusual enzyme since, although hydrogen peroxide is its only substrate, it follows a ping-pong mechanism. Here, its cofactor is oxidised by one molecule of hydrogen peroxide and then regenerated by transferring the bound oxygen to a second molecule of substrate. Despite its apparent importance in hydrogen peroxide removal, humans with genetic deficiency of catalase — "acatalasemia" — or mice genetically engineered to lack catalase completely, experience few ill effects. Peroxiredoxins are peroxidases that catalyze the reduction of hydrogen peroxide, organic hydroperoxides, as well as peroxynitrite. They are divided into three classes: typical 2-cysteine peroxiredoxins; atypical 2-cysteine peroxiredoxins; and 1-cysteine peroxiredoxins. These enzymes share the same basic catalytic mechanism, in which a redox-active cysteine (the peroxidatic cysteine) in the active site is oxidized to a sulfenic acid by the peroxide substrate. Over-oxidation of this cysteine residue in peroxiredoxins inactivates these enzymes, but this can be reversed by the action of sulfiredoxin. 
Peroxiredoxins seem to be important in antioxidant metabolism, as mice lacking peroxiredoxin 1 or 2 have shortened lifespans and develop hemolytic anaemia, while plants use peroxiredoxins to remove hydrogen peroxide generated in chloroplasts. Thioredoxin and glutathione systems The thioredoxin system contains the 12-kDa protein thioredoxin and its companion thioredoxin reductase. Proteins related to thioredoxin are present in all sequenced organisms. Plants, such as Arabidopsis thaliana, have a particularly great diversity of isoforms. The active site of thioredoxin consists of two neighboring cysteines, as part of a highly conserved CXXC motif, that can cycle between an active dithiol form (reduced) and an oxidized disulfide form. In its active state, thioredoxin acts as an efficient reducing agent, scavenging reactive oxygen species and maintaining other proteins in their reduced state. After being oxidized, the active thioredoxin is regenerated by the action of thioredoxin reductase, using NADPH as an electron donor. The glutathione system includes glutathione, glutathione reductase, glutathione peroxidases, and glutathione S-transferases. This system is found in animals, plants and microorganisms. Glutathione peroxidase is an enzyme containing four selenium-cofactors that catalyzes the breakdown of hydrogen peroxide and organic hydroperoxides. There are at least four different glutathione peroxidase isozymes in animals. Glutathione peroxidase 1 is the most abundant and is a very efficient scavenger of hydrogen peroxide, while glutathione peroxidase 4 is most active with lipid hydroperoxides. Surprisingly, glutathione peroxidase 1 is dispensable, as mice lacking this enzyme have normal lifespans, but they are hypersensitive to induced oxidative stress. In addition, the glutathione S-transferases show high activity with lipid peroxides. These enzymes are at particularly high levels in the liver and also serve in detoxification metabolism. Health research Relation to diet The dietary antioxidant vitamins A, C, and E are essential and required in specific daily amounts to prevent diseases. Polyphenols, which have antioxidant properties in vitro due to their free hydroxy groups, are extensively metabolized by catechol-O-methyltransferase which methylates free hydroxyl groups, and thereby prevents them from acting as antioxidants in vivo. Interactions Common pharmaceuticals (and supplements) with antioxidant properties may interfere with the efficacy of certain anticancer medications and radiation therapy. Pharmaceuticals and supplements that have antioxidant properties suppress the formation of free radicals by inhibiting oxidation processes. Radiation therapy induces oxidative stress that damages essential components of cancer cells, such as proteins, nucleic acids, and lipids that comprise cell membranes. Adverse effects Relatively strong reducing acids can have antinutrient effects by binding to dietary minerals such as iron and zinc in the gastrointestinal tract and preventing them from being absorbed. Examples are oxalic acid, tannins and phytic acid, which are high in plant-based diets. Calcium and iron deficiencies are not uncommon in diets in developing countries where less meat is eaten and there is high consumption of phytic acid from beans and unleavened whole grain bread. However, germination, soaking, or microbial fermentation are all household strategies that reduce the phytate and polyphenol content of unrefined cereal. 
Increases in Fe, Zn and Ca absorption have been reported in adults fed dephytinized cereals compared with cereals containing their native phytate. High doses of some antioxidants may have harmful long-term effects. The Beta-Carotene and Retinol Efficacy Trial (CARET), a study of people at high risk of lung cancer, found that smokers given supplements containing beta-carotene and vitamin A had increased rates of lung cancer. Subsequent studies confirmed these adverse effects. These harmful effects may also be seen in non-smokers, as one meta-analysis including data from approximately 230,000 patients showed that β-carotene, vitamin A or vitamin E supplementation is associated with increased mortality, but saw no significant effect from vitamin C. No health risk was seen when all the randomized controlled studies were examined together, but an increase in mortality was detected when only high-quality and low-bias risk trials were examined separately. As the majority of these low-bias trials dealt with either elderly people, or people with disease, these results may not apply to the general population. This meta-analysis was later repeated and extended by the same authors, confirming the previous results. These two publications are consistent with some previous meta-analyses that also suggested that vitamin E supplementation increased mortality, and that antioxidant supplements increased the risk of colon cancer. Beta-carotene supplementation may also increase the risk of lung cancer. Overall, the large number of clinical trials carried out on antioxidant supplements suggests that either these products have no effect on health, or that they cause a small increase in mortality in elderly or vulnerable populations. Exercise and muscle soreness A 2017 review showed that taking antioxidant dietary supplements before or after exercise is unlikely to produce a noticeable reduction in muscle soreness after a person exercises. Levels in food Antioxidant vitamins are found in vegetables, fruits, eggs, legumes and nuts. Vitamins A, C, and E can be destroyed by long-term storage or prolonged cooking. The effects of cooking and food processing are complex, as these processes can also increase the bioavailability of antioxidants, such as some carotenoids in vegetables. Processed food contains fewer antioxidant vitamins than fresh and uncooked foods, as preparation exposes food to heat and oxygen. Other antioxidants are not obtained from the diet, but instead are made in the body. For example, ubiquinol (coenzyme Q) is poorly absorbed from the gut and is made through the mevalonate pathway. Another example is glutathione, which is made from amino acids. As any glutathione in the gut is broken down to free cysteine, glycine and glutamic acid before being absorbed, even large oral intake has little effect on the concentration of glutathione in the body. Although large amounts of sulfur-containing amino acids such as acetylcysteine can increase glutathione, no evidence exists that eating high levels of these glutathione precursors is beneficial for healthy adults. Measurement and invalidation of ORAC Measurement of polyphenol and carotenoid content in food is not a straightforward process, as antioxidants collectively are a diverse group of compounds with different reactivities to various reactive oxygen species. In food science analyses in vitro, the oxygen radical absorbance capacity (ORAC) was once an industry standard for estimating antioxidant strength of whole foods, juices and food additives, mainly from the presence of polyphenols. 
Earlier measurements and ratings by the United States Department of Agriculture were withdrawn in 2012 as biologically irrelevant to human health, referring to an absence of physiological evidence for polyphenols having antioxidant properties in vivo. Consequently, the ORAC method, derived only from in vitro experiments, is no longer considered relevant to human diets or biology, as of 2010. Alternative in vitro measurements of antioxidant content in foods – also based on the presence of polyphenols – include the Folin-Ciocalteu reagent, and the Trolox equivalent antioxidant capacity assay. References Further reading External links Anti-aging substances Physiology Process chemicals Redox
Antioxidant
[ "Chemistry", "Biology" ]
6,528
[ "Redox", "Physiology", "Anti-aging substances", "Senescence", "Electrochemistry", "nan", "Process chemicals" ]
3,292
https://en.wikipedia.org/wiki/Brass
Brass is an alloy of copper and zinc, in proportions which can be varied to achieve different colours and mechanical, electrical, acoustic and chemical properties, but copper typically has the larger proportion, generally 66% copper and 34% zinc. In use since prehistoric times, it is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure. Brass is similar to bronze, a copper alloy that contains tin instead of zinc. Both bronze and brass may include small proportions of a range of other elements including arsenic, lead, phosphorus, aluminium, manganese and silicon. Historically, the distinction between the two alloys has been less consistent and clear, and increasingly museums use the more general term "copper alloy". Brass has long been a popular material for its bright gold-like appearance and is still used for drawer pulls and doorknobs. It has also been widely used to make sculpture and utensils because of its low melting point, high workability (both with hand tools and with modern turning and milling machines), durability, and electrical and thermal conductivity. Brasses with higher copper content are softer and more golden in colour; conversely those with less copper and thus more zinc are harder and more silvery in colour. Brass is still commonly used in applications where corrosion resistance and low friction are required, such as locks, hinges, gears, bearings, ammunition casings, zippers, plumbing, hose couplings, valves, SCUBA regulators, and electrical plugs and sockets. It is used extensively for musical instruments such as horns and bells. The composition of brass makes it a favorable substitute for copper in costume jewelry and fashion jewelry, as it exhibits greater resistance to corrosion. Brass is not as hard as bronze and so is not suitable for most weapons and tools. Nor is it suitable for marine uses, because the zinc reacts with minerals in salt water, leaving porous copper behind; marine brass, with added tin, avoids this, as does bronze. Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials. Properties Brass is more malleable than bronze or zinc. The relatively low melting point of brass (, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is . Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, ferrous scrap can be separated from it by passing the scrap near a powerful magnet. Brass scrap is melted and recast into billets that are extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this. Aluminium makes brass stronger and more corrosion-resistant. Aluminium also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to be formed on the surface that is thin, transparent, and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon, and manganese make brass wear- and tear-resistant. The addition of as little as 1% iron to a brass alloy will result in an alloy with a noticeable magnetic attraction. 
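To make the relationship between composition and density concrete, the following is a minimal illustrative sketch, not taken from the article, that estimates the density of the common 66% copper / 34% zinc brass quoted above using an ideal inverse rule of mixtures. The pure-element densities used here (copper 8.96 g/cm³, zinc 7.14 g/cm³) are assumed handbook values, and the ideal-mixing assumption ignores lattice effects, so measured brasses are typically a little denser than this estimate.
```python
# Rough estimate of brass density from its composition using an ideal
# inverse rule of mixtures (mass-fraction-weighted specific volumes).
# Assumed pure-element densities (g/cm^3): copper 8.96, zinc 7.14.
# Real brasses deviate slightly because alloying changes the lattice.

DENSITY = {"Cu": 8.96, "Zn": 7.14}  # g/cm^3, assumed handbook values

def alloy_density(mass_fractions):
    """Estimate alloy density from element mass fractions (must sum to 1)."""
    total = sum(mass_fractions.values())
    if abs(total - 1.0) > 1e-6:
        raise ValueError(f"mass fractions sum to {total}, expected 1.0")
    # Specific volume of the mixture = sum of (fraction / component density)
    specific_volume = sum(frac / DENSITY[el] for el, frac in mass_fractions.items())
    return 1.0 / specific_volume

if __name__ == "__main__":
    common_brass = {"Cu": 0.66, "Zn": 0.34}  # composition cited above
    print(f"Estimated density: {alloy_density(common_brass):.2f} g/cm^3")
    # Prints about 8.2 g/cm^3; measured brasses are usually quoted slightly higher.
```
As expected, the estimate falls between the densities of zinc and copper, illustrating why higher-copper brasses are denser than higher-zinc ones.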
Brass will corrode in the presence of moisture, chlorides, acetates, ammonia, and certain acids. This often happens when the copper reacts with sulfur to form a brown and eventually black surface layer of copper sulfide which, if regularly exposed to slightly acidic water such as urban rainwater, can then oxidize in air to form a patina of green-blue copper carbonate. Depending on how the patina layer was formed, it may protect the underlying brass from further damage. Although copper and zinc have a large difference in electrical potential, the resulting brass alloy does not experience internalized galvanic corrosion because of the absence of a corrosive environment within the mixture. However, if brass is placed in contact with a more noble metal such as silver or gold in such an environment, the brass will corrode galvanically; conversely, if brass is in contact with a less-noble metal such as zinc or iron, the less noble metal will corrode and the brass will be protected. Lead content To enhance the machinability of brass, lead is often added in concentrations of about 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area which, in turn, affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content. In October 1999, the California State Attorney General sued 13 key manufacturers and distributors over lead content. In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5%, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with a higher percentage of lead content. Also in California, lead-free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures". On 1 January 2010, the maximum amount of lead in "lead-free brass" in California was reduced from 4% to 0.25% lead. Corrosion-resistant brass for harsh environments Dezincification-resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the requirements. Applications with high water temperatures, chlorides present or deviating water qualities (soft water) play a role. DZR-brass is used in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters to avoid long-term failures. An example of DZR brass is the C352 brass, with about 30% zinc, 61–63% copper, 1.7–2.8% lead, and 0.02–0.15% arsenic. The lead and arsenic significantly suppress the zinc loss. "Red brasses", a family of alloys with high copper proportion and generally less than 15% zinc, are more resistant to zinc loss. One of the metals called "red brass" is 85% copper, 5% tin, 5% lead, and 5% zinc. 
Copper alloy C23000, which is also known as "red brass", contains 84–86% copper, 0.05% each iron and lead, with the balance being zinc. Another such material is gunmetal, from the family of red brasses. Gunmetal alloys contain roughly 88% copper, 8–10% tin, and 2–4% zinc. Lead can be added for ease of machining or for bearing alloys. "Naval brass", for use in seawater, contains 40% zinc but also 1% tin. The tin addition suppresses zinc-leaching. The NSF International requires brasses with more than 15% zinc, used in piping and plumbing fittings, to be dezincification-resistant. Use in musical instruments The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, or simply 'the brass', these include the trombone, tuba, trumpet, cornet, flugelhorn, baritone horn, euphonium, tenor horn, and French horn, and many other "horns", many in variously sized families, such as the saxhorns. Other wind instruments may be constructed of brass or other metals, and indeed most modern student-model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver (also known as German silver). Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine-grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and/or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusophones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin-walled bodies are more easily and efficiently made by forming sheet metal than by machining wood. The keywork of most modern woodwinds, including wooden-bodied instruments, is also usually made of an alloy such as nickel silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools—a boon to quick repairs. The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well. Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church" bells are normally made of bronze). Small handbells and "jingle bells" are also commonly made of brass. The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through" the shallot in the case of a "free" reed). Although not part of the brass section, snare drums are also sometimes made of brass. 
Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction. Germicidal and antimicrobial applications The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact. A large number of independent studies confirm this antimicrobial effect, even against antibiotic-resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation. Season cracking Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere. Types Other phases than α, β and γ are ε, a hexagonal intermetallic CuZn3, and η, a solid solution of copper in zinc. Brass alloys History Although forms of brass have been in use since prehistory, its true nature as a copper-zinc alloy was not understood until the post-medieval period because the zinc vapor which reacted with copper to make brass was not recognized as a metal. The King James Bible makes many references to "brass" to translate "nechosheth" (bronze or copper) from Hebrew to English. The earliest brasses may have been natural alloys made by smelting zinc-rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, the product of which was calamine brass, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century. Brass has sometimes historically been referred to as "yellow copper". Early copper-zinc alloys In West Asia and the Eastern Mediterranean early copper-zinc alloys are now known in small numbers from a number of 3rd millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd millennium BC sites in western India, Uzbekistan, Iran, Syria, Iraq and Canaan. Isolated examples of copper-zinc alloys are known in China from the 1st century AD, long after bronze was widely used. The compositions of these early "brass" objects are highly variable and most have zinc contents of between 5% and 15% wt which is lower than in brass produced by cementation. These may be "natural alloys" manufactured by smelting zinc rich copper ores in redox conditions. Many have similar tin contents to contemporary bronze artefacts and it is possible that some copper-zinc alloys were accidental and perhaps not even distinguished from copper. 
However the large number of copper-zinc alloys now known suggests that at least some were deliberately manufactured and many have zinc contents of more than 12% wt which would have resulted in a distinctive golden colour. By the 8th–7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains" and this may refer to "natural" brass. "Oreikhalkon" (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin aurichalcum meaning "golden copper" which became the standard term for brass. In the 4th century BC Plato knew orichalkos as rare and nearly as valuable as gold and Pliny describes how aurichalcum had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600-year-old shipwreck off Sicily found them to be an alloy made with 75–80% copper, 15–20% zinc and small percentages of nickel, lead and iron. Roman world During the later part of first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran, and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver", probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century BC the Greek Dioscorides seems to have recognized a link between zinc minerals and brass describing how Cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper and explaining that it can then be used to make brass. By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman dupondii and sestertii. The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority. Brass was produced by the cementation process where copper and zinc ore are heated together until zinc vapor is produced which reacts with the copper. There is good archaeological evidence for this process and crucibles used to produce brass by cementation have been found on Roman period sites including Xanten and Nidda in Germany, Lyon in France and at a number of sites in Britain. They vary in size from tiny acorn sized to large amphorae like vessels but all have elevated levels of zinc on the interior and are lidded. They show no signs of slag or metal prills suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid state reaction. The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions. Brass made during the early Roman period seems to have varied between 20% and 28% wt zinc. 
The high content of zinc in coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However it is now thought this was probably a deliberate change in composition and overall the use of brass increases over this period making up around 40% of all copper alloys used in the Roman world by the 4th century AD. Medieval period Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east and by the 6th–7th centuries AD over 90% of copper alloy artefacts from Egypt were made of brass. However other alloys such as low tin bronze were also used and they vary depending on local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine world. Conversely the use of true brass seems to have declined in Western Europe during this period in favor of gunmetals and other mixed alloys but by about 1000 brass artefacts are found in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria and there is archaeological and historical evidence for the production of calamine brass in Germany and the Low Countries, areas rich in calamine ore. These places would remain important centres of brass making throughout the Middle Ages period, especially Dinant. Brass objects are still collectively known as dinanderie in French. The baptismal font at St Bartholomew's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th-century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for medieval alloys of uncertain and often variable composition often covering decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Especially in Tibetan art, analysis of some objects shows very different compositions from different ends of a large piece. Aquamaniles were typically made in brass in both the European and Islamic worlds. The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open-topped crucibles. Islamic cementation seems to have used zinc oxide known as tutiya or tutty rather than zinc ores for brass-making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. It could then be used for brass making or medicinal purposes. In 10th century Yemen al-Hamdani described how spreading al-iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. 
The 13th century Iranian writer al-Kashani describes a more complex process whereby tutiya was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimize the escape of zinc vapor. In Europe a similar liquid process in open-topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power" of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal. German brass making crucibles are known from Dortmund, dating to the 10th century AD, and from Soest and Schwerte in Westphalia, dating to around the 13th century. These confirm Theophilus' account, as they are open-topped (although ceramic discs from Soest may have served as loose lids used to reduce zinc evaporation) and have slag on the interior resulting from a liquid process. Africa Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes", the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc-brass" and the Bronze Head of Queen Idia, both also in the British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe. Renaissance and post-medieval Europe The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c.20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggests that this was a lower temperature, not entirely liquid, process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximize zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting. 16th-century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550. 
Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognized that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal". However some earlier high zinc, low iron brasses such as the 1530 Wightman brass memorial plaque from England may have been made by alloying copper with zinc and include traces of cadmium similar to those found in some zinc ingots from China. However, the cementation process was not abandoned, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900–950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post-medieval period buoyed by innovations such as the 16th century introduction of water powered hammers for the production of wares such as pots. By 1559 the German city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries the brass industry was also established in England taking advantage of abundant supplies of cheap copper smelted in the new coal fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper helping it react and zinc contents of up to 33% wt were reported using this new technique. In 1738 Nehemiah's son William Champion patented a technique for the first industrial scale distillation of metallic zinc known as distillation per descensum or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewelry. However Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass and the archaeological remains of bee-hive shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century developments in cheaper zinc distillation such as Jean-Jacques Dony's horizontal furnaces in Belgium and the reduction of tariffs on zinc as well as demand for corrosion-resistant high zinc alloys increased the popularity of speltering and as a result cementation was largely abandoned by the mid-19th century. See also Brass bed Brass rubbing List of copper alloys Citations General references Bayley, J. (1990). "The Production of Brass in Antiquity with Particular Reference to Roman Britain". In Craddock, P. T. (ed.). 2000 Years of Zinc and Brass. London: British Museum. Craddock, P. T. and Eckstein, K. (2003). "Production of Brass in Antiquity by Direct Reduction". In Craddock, P. T. and Lang, J. (eds.). Mining and Metal Production Through the Ages. London: British Museum. Day, J. (1990). "Brass and Zinc in Europe from the Middle Ages until the 19th century". In Craddock, P. T. (ed.). 2000 Years of Zinc and Brass. London: British Museum. Day, J. (1991). "Copper, Zinc and Brass Production". In Day, J. and Tylecote, R. F. (eds.). The Industrial Revolution in Metals. London: The Institute of Metals. Rehren, T. and Martinon-Torres, M. (2008) "Naturam ars imitate: European brassmaking between craft and science". In Martinon-Torres, M. and Rehren, T. (eds.). 
Archaeology, History and Science: Integrating Approaches to Ancient Material. Left Coast Press. External links Copper alloys History of metallurgy Zinc alloys
Brass
[ "Chemistry", "Materials_science" ]
5,823
[ "Copper alloys", "Metallurgy", "History of metallurgy", "Alloys", "Zinc alloys" ]
3,336
https://en.wikipedia.org/wiki/Brackish%20water
Brackish water, sometimes termed brack water, is water occurring in a natural environment that has more salinity than freshwater, but not as much as seawater. It may result from mixing seawater (salt water) and fresh water together, as in estuaries, or it may occur in brackish fossil aquifers. The word comes from the Middle Dutch root brak. Certain human activities can produce brackish water, in particular civil engineering projects such as dikes and the flooding of coastal marshland to produce brackish water pools for freshwater prawn farming. Brackish water is also the primary waste product of the salinity gradient power process. Because brackish water is hostile to the growth of most terrestrial plant species, without appropriate management it can be damaging to the environment (see article on shrimp farms). Technically, brackish water contains between 0.5 and 30 grams of salt per litre—more often expressed as 0.5 to 30 parts per thousand (‰), which is a specific gravity of between 1.0004 and 1.0226. Thus, brackish covers a range of salinity regimes and is not considered a precisely defined condition. It is characteristic of many brackish surface waters that their salinity can vary considerably over space or time. Water with a salt concentration greater than 30‰ is considered saline. Brackish water habitats Estuaries Brackish water condition commonly occurs when fresh water meets seawater. In fact, the most extensive brackish water habitats worldwide are estuaries, where a river meets the sea. The River Thames flowing through London is a classic river estuary. The town of Teddington a few miles west of London marks the boundary between the tidal and non-tidal parts of the Thames, although it is still considered a freshwater river about as far east as Battersea insofar as the average salinity is very low and the fish fauna consists predominantly of freshwater species such as roach, dace, carp, perch, and pike. The Thames Estuary becomes brackish between Battersea and Gravesend, and the diversity of freshwater fish species present is smaller, primarily roach and dace; euryhaline marine species such as flounder, European seabass, mullet, and smelt become much more common. Further east, the salinity increases and the freshwater fish species are completely replaced by euryhaline marine ones, until the river reaches Gravesend, at which point conditions become fully marine and the fish fauna resembles that of the adjacent North Sea and includes both euryhaline and stenohaline marine species. A similar pattern of replacement can be observed with the aquatic plants and invertebrates living in the river. This type of ecological succession from freshwater to marine ecosystem is typical of river estuaries. River estuaries form important staging points during the migration of anadromous and catadromous fish species, such as salmon, shad and eels, giving them time to form social groups and to adjust to the changes in salinity. Salmon are anadromous, meaning they live in the sea but ascend rivers to spawn; eels are catadromous, living in rivers and streams, but returning to the sea to breed. Besides the species that migrate through estuaries, there are many other fish that use them as "nursery grounds" for spawning or as places young fish can feed and grow before moving elsewhere. Herring and plaice are two commercially important species that use the Thames Estuary for this purpose. Estuaries are also commonly used as fishing grounds and as places for fish farming or ranching. 
For example, Atlantic salmon farms are often located in estuaries, although this has caused controversy, because in doing so, fish farmers expose migrating wild fish to large numbers of external parasites such as sea lice that escape from the pens the farmed fish are kept in. Mangroves Another important brackish water habitat is the mangrove swamp or mangal. Many, though not all, mangrove swamps fringe estuaries and lagoons where the salinity changes with each tide. Among the most specialised residents of mangrove forests are mudskippers, fish that forage for food on land, and archer fish, perch-like fish that "spit" at insects and other small animals living in the trees, knocking them into the water where they can be eaten. Like estuaries, mangrove swamps are extremely important breeding grounds for many fish, with species such as snappers, halfbeaks, and tarpon spawning or maturing among them. Besides fish, numerous other animals use mangroves, including such species as the saltwater crocodile, American crocodile, proboscis monkey, diamondback terrapin, and the crab-eating frog, Fejervarya cancrivora (formerly Rana cancrivora). Mangroves represent important nesting sites for numerous bird groups such as herons, storks, spoonbills, ibises, kingfishers, shorebirds and seabirds. Although often plagued with mosquitoes and other insects that make them unpleasant for humans, mangrove swamps are very important buffer zones between land and sea, and are a natural defense against hurricane and tsunami damage in particular. The Sundarbans and Bhitarkanika Mangroves are two of the largest mangrove forests in the world, both on the coast of the Bay of Bengal. Brackish seas and lakes Some seas and lakes are brackish. The Baltic Sea is a brackish sea adjoining the North Sea. Originally the Eridanos river system prior to the Pleistocene, it has since been flooded by the North Sea but still receives so much freshwater from the adjacent lands that the water is brackish. As seawater is denser, the water in the Baltic is stratified, with seawater at the bottom and freshwater at the top. Limited mixing occurs because of the lack of tides and storms, with the result that the fish fauna at the surface is freshwater in composition while that lower down is more marine. Cod are an example of a species only found in deep water in the Baltic, while pike are confined to the less saline surface waters. The Caspian Sea is the world's largest lake and contains brackish water with a salinity about one-third that of normal seawater. The Caspian is famous for its peculiar animal fauna, including one of the few non-marine seals (the Caspian seal) and the great sturgeons, a major source of caviar. Hudson Bay is a brackish marginal sea of the Arctic Ocean. It remains brackish due to its limited connections to the open ocean, the very high levels of freshwater surface runoff entering from the large Hudson Bay drainage basin, and the low rate of evaporation caused by its being completely covered in ice for over half the year. In the Black Sea the surface water is brackish with an average salinity of about 17–18 parts per thousand compared to 30 to 40 for the oceans. The deep, anoxic water of the Black Sea originates from warm, salty water of the Mediterranean. Lake Texoma, a reservoir on the border between the U.S. 
states of Texas and Oklahoma, is a rare example of a brackish lake that is neither part of an endorheic basin nor a direct arm of the ocean, though its salinity is considerably lower than that of the other bodies of water mentioned here. The reservoir was created by the damming of the Red River of the South, which (along with several of its tributaries) receives large amounts of salt from natural seepage from buried deposits in the upstream region. The salinity is high enough that striped bass, a fish normally found only in salt water, has self-sustaining populations in the lake. Brackish marsh Other brackish bodies of water Human uses Brackish water is being used by humans in many different sectors. It is commonly used as cooling water for power generation and in a variety of ways in the mining, oil, and gas industries. Once desalinated it can also be used for agriculture, livestock, and municipal uses. Brackish water can be treated using reverse osmosis, electrodialysis, and other filtration processes. See also List of brackish bodies of water References Further reading Moustakas, A. & I. Karakassis. How diverse is aquatic biodiversity research?, Aquatic Ecology, 39, 367-375 Liquid water Aquatic ecology Coastal geography
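As a worked illustration of the quantitative definition given earlier in this article (0.5 to 30 grams of salt per litre, corresponding to a specific gravity of roughly 1.0004 to 1.0226), the sketch below classifies a water sample by salinity and linearly interpolates a rough specific gravity between those two published endpoints. The linear interpolation and the function names are illustrative assumptions, not part of the source article.
```python
# Classify water by salinity (grams of salt per litre, i.e. parts per thousand
# for dilute solutions) using the thresholds quoted in the article:
# below 0.5 ppt -> fresh, 0.5-30 ppt -> brackish, above 30 ppt -> saline.

def classify_salinity(ppt: float) -> str:
    """Return 'fresh', 'brackish', or 'saline' for a salinity in parts per thousand."""
    if ppt < 0.5:
        return "fresh"
    if ppt <= 30.0:
        return "brackish"
    return "saline"

def rough_specific_gravity(ppt: float) -> float:
    """Linearly interpolate specific gravity between the article's endpoints
    (0.5 ppt ~ 1.0004, 30 ppt ~ 1.0226). Only meaningful inside that range."""
    return 1.0004 + (ppt - 0.5) * (1.0226 - 1.0004) / (30.0 - 0.5)

if __name__ == "__main__":
    for sample in (0.2, 12.0, 17.5, 35.0):  # g/L; 17.5 is roughly Black Sea surface water
        label = classify_salinity(sample)
        note = f", rough SG ~ {rough_specific_gravity(sample):.4f}" if label == "brackish" else ""
        print(f"{sample} g/L -> {label}{note}")
```
Run on the example values, this marks the Black Sea surface figure as brackish with a specific gravity of about 1.013, consistent with the range stated in the definition.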
Brackish water
[ "Biology" ]
1,727
[ "Aquatic ecology", "Ecosystems" ]
3,363
https://en.wikipedia.org/wiki/Beer
Beer is an alcoholic beverage produced by the brewing and fermentation of starches from cereal grain—most commonly malted barley, although wheat, maize (corn), rice, and oats are also used. The grain is mashed to convert starch in the grain to sugars, which dissolve in water to form wort. Fermentation of the wort by yeast produces ethanol and carbonation in the beer. Beer is one of the oldest alcoholic drinks in the world, the most widely consumed, and the third most popular drink after water and tea. Most modern beer is brewed with hops, which add bitterness and other flavours and act as a natural preservative and stabilising agent. Other flavouring agents, such as gruit, herbs, or fruits, may be included or used instead of hops. In commercial brewing, natural carbonation is often replaced with forced carbonation. Beer is distributed in bottles and cans, and is commonly available on draught in pubs and bars. The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. The strength of modern beer is usually around 4% to 6% alcohol by volume (ABV). Some of the earliest writings mention the production and distribution of beer: the Code of Hammurabi (1750 BC) included laws regulating it, while "The Hymn to Ninkasi", a prayer to the Mesopotamian goddess of beer, contains a recipe for it. Beer forms part of the culture of many nations and is associated with social traditions such as beer festivals, as well as activities like pub games. Etymology In early forms of English and in the Scandinavian languages, the usual word for beer was the word whose Modern English form is ale. The modern word beer comes into present-day English from Old English bēor, itself from Common Germanic; it is found throughout the West Germanic and North Germanic dialects (modern Dutch and German bier, Old Norse bjórr). The earlier etymology of the word is debated: the three main theories are that the word originates in a Proto-Germanic word (putatively derived from Proto-Indo-European) meaning "brewer's yeast, beer dregs"; that it is related to the word barley; or that it was somehow borrowed from Latin bibere, "to drink". Christine Fell, in Leeds Studies in English (1975), suggests that the Old English/Norse word bēor did not originally denote ale or beer, but a strong, sweet drink rather like mead or cider. Whatever the case, the meaning of bēor expanded to cover the meaning of ale. When hopped ale from Europe was imported into Britain in the late Middle Ages, it was described as "beer" to differentiate it from the British unhopped ale, later acquiring a broader meaning. History Prehistory Beer is one of the world's oldest prepared alcoholic drinks. The earliest archaeological evidence of fermentation consists of 13,000-year-old residues of a beer with the consistency of gruel, used by the semi-nomadic Natufians for ritual feasting, at the Raqefet Cave in the Carmel Mountains near Haifa in northern Israel. There is evidence that beer was produced at Göbekli Tepe during the Pre-Pottery Neolithic (around 8500 BC to 5500 BC). The earliest clear chemical evidence of beer produced from barley dates to about 3500–3100 BC, from the site of Godin Tepe in the Zagros Mountains of western Iran. Early civilisations Beer is recorded in the written history of ancient Egypt, and archaeologists speculate that beer was instrumental in the formation of civilizations. 
Approximately 5000 years ago, workers in the city of Uruk (modern-day Iraq) were paid by their employers with volumes of beer. During the building of the Egyptian pyramids, each worker got a daily ration of four to five litres of beer, which served as both nutrition and refreshment and was crucial to the pyramids' construction. Some of the earliest Sumerian writings contain references to beer; examples include a prayer to the goddess Ninkasi, known as "The Hymn to Ninkasi", which served as both a prayer and a method of remembering the recipe for beer in a culture with few literate people; the ancient advice ("Fill your belly. Day and night make merry") given to Gilgamesh by the alewife Siduri in the Epic of Gilgamesh may also, at least in part, have referred to the consumption of beer. The Ebla tablets, discovered in 1974 in Ebla, Syria, show that beer was produced in the city in 2500 BC. A fermented drink using rice and fruit was made in China around 7000 BC. Unlike sake, mould was not used to saccharify the rice (amylolytic fermentation); the rice was probably prepared for fermentation by chewing or malting. During the Vedic period in Ancient India, there are records of the consumption of the beer-like sura. Xenophon noted that during his travels, beer was being produced in Armenia. Medieval Beer was spread through Europe by Germanic and Celtic tribes as far back as 3000 BC, and it was mainly brewed on a domestic scale. The product that the early Europeans drank might not be recognised as beer by most people today. Alongside the basic starch source, the early European beers may have contained fruits, honey, numerous types of plants, spices, and other substances such as narcotic herbs. This mixture was called gruit, and some of its ingredients could cause hallucinations if improperly heated. The composition of gruit differed from brewer to brewer. What these early beers did not contain was hops, as that was a later addition, first mentioned in Europe around 822 by a Carolingian abbot and again in the 12th century by the abbess Hildegard of Bingen. In 1516, William IV, Duke of Bavaria adopted the Reinheitsgebot (purity law), perhaps the oldest food-quality regulation still in use in the 21st century, according to which the only allowed ingredients of beer are water, hops, and barley-malt. Beer produced before the Industrial Revolution was made and sold on a domestic scale, although by the 7th century AD, beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal to industrial manufacture, while domestic production ceased to be significant by the end of the 19th century. Modern In 1912, brown bottles began to be used by the Joseph Schlitz Brewing Company of Milwaukee, Wisconsin, in the United States. This innovation has since been accepted worldwide as it prevents light rays from degrading the quality and stability of beer. The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers, ranging from brewpubs to regional breweries. As of 2006, more than of beer are sold per year, producing global revenues of US$294.5 billion. In 2010, China's beer consumption hit , or nearly twice that of the United States, but only 5 per cent sold were premium beers, compared with 50 per cent in France and Germany. Beer is the most widely consumed of all alcoholic drinks. 
A widely publicised study in 2018 suggested that sudden decreases in barley production due to extreme drought and heat could in the future cause substantial volatility in the availability and price of beer. Brewing Process The process of making beer is brewing. It converts the grain into a sugary liquid called wort and then ferments this into beer using yeast. The first step, mixing malted barley with hot water in a mash tun, is "mashing". The starches are converted to sugars, and the sweet wort is drained off. The grains are washed to extract as much fermentable liquid from the grains as possible. The sweet wort is put into a kettle, or "copper", and boiled. Hops are added as a source of bitterness, flavour, and aroma. The longer the hops are boiled, the more bitterness they contribute, but the less hop flavour and aroma remain. The wort is cooled and the yeast is added. The wort is then fermented, often for a week or longer. The yeast settles, leaving the beer clear. During fermentation, most of the carbon dioxide is allowed to escape through a trap. The carbonation is often increased either by transferring the beer to a pressure vessel and introducing pressurised carbon dioxide or by transferring it before the fermentation is finished so that carbon dioxide pressure builds up inside the container. Ingredients The basic ingredients of beer are water; a starch source, usually malted barley; a brewer's yeast to produce the fermentation; and a flavouring such as hops. A mixture of starch sources may be used, with a secondary carbohydrate source, such as maize (corn), rice, wheat, or sugar, often termed an adjunct, especially when used alongside malted barley. Less widely used starch sources include millet, sorghum, and cassava root in Africa; potato in Brazil; and agave in Mexico. Water is the main ingredient, accounting for 93% of beer's weight. The level of dissolved bicarbonate influences beer's finished taste. Due to the mineral properties of each region's water, specific areas were originally the sole producers of certain types of beer, each identifiable by regional characteristics. Dublin's hard water is well-suited to making stout, such as Guinness, while the Plzeň Region's soft water is ideal for brewing Pilsner, such as Pilsner Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers of pale ale add gypsum in a process known as Burtonisation. The starch source provides the fermentable material and determines the strength and flavour of the beer. The most common starch source used in beer is malted grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated grain in a kiln. Malting produces enzymes that convert starches into fermentable sugars. Different roasting times and temperatures produce different colours of malt from the same grain. Darker malts produce darker beers. Nearly all beers use barley malt for most of the starch, as its fibrous hull remains attached to the grain during threshing. After malting, barley is milled, which finally removes the hull, breaking it into large pieces. These pieces remain with the grain during the mash and act as a filter bed during lautering, when sweet wort is separated from insoluble grain material. Other grains, including wheat, rice, oats, and rye, and less frequently, corn and sorghum may be used. 
Some brewers have produced gluten-free beer, made with sorghum, for those who cannot consume gluten-containing grains like wheat, barley, and rye. Flavouring beer is the sole commercial use of hops. The flower of the hop vine acts as a flavouring and preservative agent in nearly all beer made today. The flowers themselves are often called "hops". The first historical mention of the use of hops in beer dates from 822 AD in monastery rules written by Adalard of Corbie, though widespread cultivation of hops for use in beer began in the thirteenth century. Before then, beer was flavoured with other plants such as grains of paradise or 'alehoof'. Aromatic herbs, berries, and even wormwood were combined into a flavouring mixture known as gruit. Some beers today, such as Fraoch by the Scottish Heather Ales company and Cervoise Lancelot by the French Brasserie-Lancelot company, use plants other than hops for flavouring. Hops contribute a bitterness that balances the sweetness of the malt; the bitterness of beers is measured on the International Bitterness Units scale. Hops further contribute floral, citrus, and herbal aromas and flavours. They have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms, and aids in "head retention", the length of time that a foamy head created by carbonation will last. The acidity of hops is a preservative. Yeast is the microorganism responsible for fermenting beer. It metabolises the sugars, producing ethanol and carbon dioxide, and thereby turns wort into beer. In addition, yeast influences the character and flavour. The dominant types of beer yeast are top-fermenting Saccharomyces cerevisiae and bottom-fermenting Saccharomyces pastorianus. Brettanomyces ferments lambics, and Torulaspora delbrueckii ferments Bavarian weissbier. Before the role of yeast in fermentation was understood, fermentation involved wild or airborne yeasts. A few styles, such as lambics, rely on this method today, but most modern fermentation adds pure yeast cultures. Some brewers add clarifying agents or finings to beer, which typically precipitate (collect as a solid) out along with protein solids, and are found only in trace amounts in the finished product. This process makes the beer appear bright and clean, rather than the cloudy appearance of ethnic and older styles such as wheat beers. Clarifying agents include isinglass, from the swimbladders of fish; Irish moss, a seaweed; kappa carrageenan, from the seaweed Kappaphycus cottonii; Polyclar (artificial); and gelatin. Beer marked "suitable for vegans" is clarified either with seaweed or with artificial agents. Industry In the 21st century, larger breweries have repeatedly absorbed smaller breweries. In 2002, South African Breweries bought the North American Miller Brewing Company to found SABMiller, becoming the second-largest brewery after North American Anheuser-Busch. In 2004, the Belgian Interbrew was the third-largest brewery by volume, and the Brazilian AmBev was the fifth-largest. They merged into InBev, becoming the largest brewery. In 2007, SABMiller surpassed InBev and Anheuser-Busch when it acquired Royal Grolsch, the brewer of Dutch brand Grolsch. In 2008, when InBev (the second-largest) bought Anheuser-Busch (the third-largest), the new Anheuser-Busch InBev company became again the largest brewer in the world. 
According to the market research firm Technavio, AB InBev was the largest brewing company in the world, with Heineken second, CR Snow third, Carlsberg fourth, and Molson Coors fifth. A microbrewery, or craft brewery, produces a limited amount of beer. The maximum amount of beer a brewery can produce and still be classed as a 'microbrewery' varies by region and by authority; in the US, it is commonly set at 15,000 US beer barrels a year. A brewpub is a type of microbrewery that incorporates a pub or other drinking establishment. The highest density of breweries in the world, most of them microbreweries, exists in Franconia, Germany, especially in the district of Upper Franconia, which has about 200 breweries. The Benedictine Weihenstephan brewery in Bavaria, Germany, can trace its roots to the year 768, as a document from that year refers to a hop garden in the area paying a tithe to the monastery. It claims to be the oldest working brewery in the world. Varieties Top-fermented beers Top-fermented beers are most commonly produced with Saccharomyces cerevisiae, a top-fermenting yeast which clumps and rises to the surface, typically at relatively warm fermentation temperatures. At these temperatures, yeast produces significant amounts of esters and other secondary flavour and aroma products, and the result is often a beer with slightly "fruity" compounds resembling apple, pear, pineapple, banana, plum, or prune, among others. After the introduction of hops into England from Flanders in the 15th century, "ale" came to mean an unhopped fermented brew, while "beer" meant a brew with an infusion of hops. The term 'real ale' was coined by the Campaign for Real Ale (CAMRA) in 1973 for "beer brewed from traditional ingredients, matured by secondary fermentation in the container from which it is dispensed, and served without the use of extraneous carbon dioxide". It is applied to both bottle conditioned and cask conditioned beers. As for the types of top-fermented beers, pale ale predominantly uses pale malt. It is one of the world's major beer styles and includes India pale ale (IPA). Mild ale has a predominantly malty palate. It is usually dark, with an abv of 3% to 3.6%. Wheat beer is brewed with a large proportion of wheat although it often also contains a significant proportion of malted barley. Wheat beers are usually top-fermented. Stout is a dark beer made using roasted barley, and typically brewed with slow-fermenting yeast. There are a number of variations including dry stout (such as Guinness), sweet stout, and Imperial (or Russian) stout. Stout was originally the strongest variety of porter, a dark brown beer popular with the street and river porters of eighteenth-century London. Bottom-fermented beers Lager is cool-fermented beer. Pale lagers are the most commonly drunk beers in the world. Many are of the "pilsner" type. The name "lager" comes from the German "lagern" for "to store", as brewers in Bavaria stored beer in cool cellars during the warm summer months, allowing the beers to continue to ferment, and to clear any sediment. Lager yeast is a cool bottom-fermenting yeast (Saccharomyces pastorianus). Lager typically undergoes primary fermentation at cool temperatures, and then a long, colder secondary fermentation (the lagering phase). During the secondary stage, the lager clears and mellows. The cooler conditions inhibit the natural production of esters and other byproducts, resulting in a "cleaner"-tasting beer. With improved modern yeast strains, most lager breweries use only short periods of cold storage, typically no more than 2 weeks.
Some traditional lagers are still stored for several months. Lambic Lambic, a beer of Belgium, is naturally fermented using wild yeasts, rather than cultivated ones. Many of these are not strains of brewer's yeast (Saccharomyces cerevisiae) and may have significant differences in aroma and sourness. Yeast varieties such as Brettanomyces bruxellensis and Brettanomyces lambicus are common in lambics. In addition, other organisms such as Lactobacillus bacteria produce acids which contribute to the sourness. Non-barley beers Around the world, many traditional and ancient starch-based drinks are classed as beer. In Africa, there are ethnic beers made from sorghum or millet, such as Oshikundu in Namibia and Tella in Ethiopia. Kyrgyzstan also has a beer made from millet; it is a low alcohol, somewhat porridge-like drink called "Bozo". Bhutan, Nepal, Tibet and Sikkim also use millet in Chhaang, a popular semi-fermented rice/millet drink in the eastern Himalayas. The Andes in South America has Chicha, made from germinated maize (corn); while the indigenous peoples in Brazil have Cauim, a traditional drink made since pre-Columbian times by chewing manioc so that an enzyme (amylase) present in human saliva can break down the starch into fermentable sugars; this is similar to Masato in Peru. Beers made from bread, among the earliest forms of the drink, are Sahti in Finland, Kvass in Russia and Ukraine, and Bouza in Sudan. Fermented bread was used in Mesopotamia 4,000 years ago. Food waste activists, inspired by these ancient recipes, use leftover bread to replace a third of the malted barley that would otherwise be used for brewing their craft ale. Measurement Beer is measured and assessed by colour, by strength and by bitterness. The strength of modern beer is usually around 4% to 6%, measured as alcohol by volume (ABV). The perceived bitterness is measured by the International Bitterness Units scale (IBU), defined in co-operation between the American Society of Brewing Chemists and the European Brewery Convention. The international scale was a development of the European Bitterness Units scale, often abbreviated as EBU, and the bitterness values should be identical. Colour Beer colour is determined by the malt. The most common colour is a pale amber produced from using pale malts. Pale lager and pale ale are terms used for beers made from malt dried and roasted with the fuel coke. Coke was first used for roasting malt in 1642, but it was not until around 1703 that the term pale ale was used. In terms of sales volume, most of today's beer is based on the pale lager brewed in 1842 in the city of Plzeň in the present-day Czech Republic. The modern pale lager is light in colour due to use of coke for kilning, which gives off heat with little smoke. Dark beers are usually brewed from a pale malt or lager malt base with a small proportion of darker malt added to achieve the desired shade. Other colourants—such as caramel—are also widely used to darken beers. Very dark beers, such as stout, use dark or patent malts that have been roasted longer. Some have roasted unmalted barley. Strength Beer ranges from less than 3% alcohol by volume (abv) to around 14% abv, though this strength can be increased to around 20% by re-pitching with champagne yeast, and to 55% ABV by the freeze-distilling process. The alcohol content of beer varies by local practice or beer style. The pale lagers that most consumers are familiar with fall in the range of 4–6%, with a typical ABV of 5%.
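In practice, strength is usually not measured directly but estimated from how much sugar the yeast has consumed. The short Python sketch below illustrates this with a common homebrewing rule of thumb, ABV ≈ (OG − FG) × 131.25, where OG and FG are the original and final specific gravities; the constant and the example gravities are illustrative assumptions added here, not values taken from this article.

def estimate_abv(original_gravity: float, final_gravity: float) -> float:
    """Approximate alcohol by volume from specific gravities.

    Uses the common homebrewing rule of thumb ABV ~= (OG - FG) * 131.25,
    which is only a rough linear approximation.
    """
    return (original_gravity - final_gravity) * 131.25

# Example: a wort fermenting from a gravity of 1.048 down to 1.010
print(f"{estimate_abv(1.048, 1.010):.1f}% ABV")  # about 5.0% ABV, typical of a pale lager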
The customary strength of British ales is quite low, with many session beers being around 4% abv. In Belgium, some beers, such as table beer, are of such low alcohol content (1%–4%) that they are served instead of soft drinks in some schools. The weakest beers are described as 'alcohol-free', typically containing 0.05% ABV; this compares to low alcohol beers which may contain 1.2% ABV or less, and conventional beers which average 4.4% ABV. The strength of beers has climbed during the later years of the 20th century. Vetter 33, a 10.5% ABV (33 degrees Plato, hence Vetter "33") doppelbock, was listed in the 1994 Guinness Book of World Records as the strongest beer at that time, though Samichlaus, by the Swiss brewer Hürlimann, had also been listed by the Guinness Book of World Records as the strongest at 14% ABV. Since then, some brewers have used champagne yeasts to increase the alcohol content of their beers. Samuel Adams reached 20% ABV with Millennium, and then surpassed that amount to 25.6% ABV with Utopias. The strongest beer brewed in Britain was Baz's Super Brew by Parish Brewery, a 23% ABV beer. In September 2011, the Scottish brewery BrewDog produced Ghost Deer, which, at 28%, they claim to be the world's strongest beer produced by fermentation alone. The product claimed to be the strongest beer made is Schorschbräu's 2011 Schorschbock 57 with 57.5% ABV. It was preceded by The End of History, a 55% Belgian ale, made by BrewDog in 2010. The same company had previously made Sink The Bismarck!, a 41% ABV IPA, and Tactical Nuclear Penguin, a 32% ABV Imperial stout. Each of these beers is made using the eisbock method of fractional freezing, in which a strong ale is partially frozen and the ice is repeatedly removed, until the desired strength is reached, a process that may class the product as spirits rather than beer. The German brewery Schorschbräu's Schorschbock, a 31% ABV eisbock, and Hair of the Dog's Dave, a 29% abv barley wine made in 1994, used the same fractional freezing method. A 60% ABV blend of beer with whiskey was jokingly claimed as the strongest beer by a Dutch brewery in July 2010. Serving Draught Draught (also spelled "draft") beer from a pressurised keg using a lever-style dispenser and a spout is the most common method of dispensing in bars around the world. A metal keg is pressurised with carbon dioxide (CO2) gas which drives the beer to the dispensing tap or faucet. Some beers may be served with a nitrogen/carbon dioxide mixture. Nitrogen produces fine bubbles, resulting in a dense head and a creamy mouthfeel. In the 1980s, Guinness introduced the beer widget, a nitrogen-pressurised ball inside a can which creates a moderately dense, tight head. This approximates the effect of serving from a keg, at least for a British-style beer which does not have an especially large head. Cask-conditioned ales (or cask ales) are unfiltered and unpasteurised beers. These beers are termed "real ale" by the CAMRA organisation. When a cask arrives in a pub, it is placed horizontally on a "stillage" frame, designed to hold it steady and at the right angle, and then allowed to cool to cellar temperature before being tapped and vented—a tap is driven through a rubber bung at the bottom of one end, and a hard spile is used to open a hole in the uppermost side of the cask.
The act of stillaging and then venting a beer in this manner typically disturbs all the sediment, so it must be left for a suitable period of hours to days to "drop" (clear) again, as well as to fully condition the beer. At this point the beer is ready to sell, either being pulled through a beer line with a hand pump, or simply being "gravity-fed" directly into the glass. Draught beer's environmental impact can be 68% lower than bottled beer due to packaging differences. A life cycle study of one beer brand, including grain production, brewing, bottling, distribution and waste management, shows that the CO2 emissions from a 6-pack of micro-brew beer are about 3 kilograms (6.6 pounds). The loss of natural habitat potential from the 6-pack of micro-brew beer is estimated to be 2.5 square metres (26 square feet). Downstream emissions from distribution, retail, storage and disposal of waste can be over 45% of a bottled micro-brew beer's CO2 emissions. Where legal, the use of a refillable jug, reusable bottle or other reusable containers to transport draught beer from a store or a bar, rather than buying pre-bottled beer, can reduce the environmental impact of beer consumption. Packaging Most beers are cleared of yeast by filtering when packaged in bottles and cans. However, bottle conditioned beers retain some yeast—either by being unfiltered, or by being filtered and then reseeded with fresh yeast. Many beers are sold in cans, though there is considerable variation in the proportion between different countries. In Sweden in 2001, 63.9% of beer was sold in cans. People either drink from the can or pour the beer into a glass. A technology developed by Crown Holdings for the 2010 FIFA World Cup is the 'full aperture' can, so named because the entire lid is removed during the opening process, turning the can into a drinking cup. Cans protect the beer from light (thereby preventing spoilage) and have a seal less prone to leaking over time than bottles. Cans were initially viewed as a technological breakthrough for maintaining the quality of a beer, then became commonly associated with less expensive, mass-produced beers, even though the quality of storage in cans is much like bottles. Plastic (PET) bottles are used by some breweries. Temperature The temperature of a beer has an influence on a drinker's experience; warmer temperatures reveal the range of flavours in a beer but cooler temperatures are more refreshing. Most drinkers prefer pale lager to be served chilled, a low- or medium-strength pale ale to be served cool, and a strong barley wine or imperial stout to be served at room temperature. Beer writer Michael Jackson proposed a five-level scale for serving temperatures: well chilled for "light" beers (pale lagers); chilled for Berliner Weisse and other wheat beers; lightly chilled for all dark lagers, altbier and German wheat beers; cellar temperature for regular British ale, stout and most Belgian specialities; and room temperature for strong dark ales (especially trappist beer) and barley wine. Drinking chilled beer began with the development of artificial refrigeration and by the 1870s had spread to those countries that concentrated on brewing pale lager. Chilling beer makes it more refreshing, though below 15.5 °C (60 °F) the chilling starts to reduce taste awareness and reduces it significantly at lower temperatures. Beer served unchilled—either cool or at room temperature—reveals more of its flavours.
Cask Marque, a non-profit UK beer organisation, has set a temperature standard range of 12°–14 °C (53°–57 °F) for cask ales to be served. Vessels Beer is consumed out of a variety of vessels, such as a glass, a beer stein, a mug, a pewter tankard, a beer bottle or a can; or at music festivals and some bars and nightclubs, from a plastic cup. The shape of the glass from which beer is consumed can influence the perception of the beer and can define and accent the character of the style. Breweries offer branded glassware intended only for their own beers as a marketing promotion, as this increases sales of their product. The pouring process has an influence on a beer's presentation. The rate of flow from the tap or other serving vessel, tilt of the glass, and position of the pour (in the centre or down the side) into the glass all influence the result, such as the size and longevity of the head, lacing (the pattern left by the head as it moves down the glass as the beer is drunk), and the release of carbonation. A beer tower or portable beer tap is sometimes used in bars and restaurants to allow a group of customers to serve themselves. The device consists of a tall container with a cooling mechanism and a beer tap at its base. Chemistry Beer contains the phenolic acids 4-hydroxyphenylacetic acid, vanillic acid, caffeic acid, syringic acid, p-coumaric acid, ferulic acid, and sinapic acid. Alkaline hydrolysis experiments show that most of the phenolic acids are present as bound forms and only a small portion can be detected as free compounds. Hops, and beer made with them, contain 8-prenylnaringenin which is a potent phytoestrogen. Hops also contain myrcene, humulene, xanthohumol, isoxanthohumol, myrcenol, linalool, tannins, and resin. The alcohol 2M2B is a component of hops brewing. Barley, in the form of malt, brings the condensed tannins prodelphinidins B3, B9 and C2 into beer. Tryptophol, tyrosol, and phenylethanol are aromatic higher alcohols (congeners) produced by yeast as secondary products of alcoholic fermentation during the brewing process. Nutrition Beers vary in their nutritional content. The ingredients used to make beer, including the yeast, provide a rich source of nutrients; therefore beer may contain nutrients including magnesium, selenium, potassium, phosphorus, biotin, chromium and B vitamins. Beer is sometimes referred to as "liquid bread", though beer is not a meal in itself. Health effects A 2016 systematic review and meta-analysis found that moderate ethanol consumption brought no mortality benefit compared with lifetime abstention from ethanol consumption. Some studies have concluded that drinking small quantities of alcohol (less than one drink in women and two in men, per day) is associated with a decreased risk of heart disease, stroke, diabetes mellitus, and early death. Some of these studies combined former ethanol drinkers and lifelong abstainers into a single group of nondrinkers, which hides the health benefits of lifelong abstention from ethanol. The long-term health effects of continuous, moderate or heavy alcohol consumption include the risk of developing alcoholism and alcoholic liver disease. Alcoholism, also known as "alcohol use disorder", is a broad term for any drinking of alcohol that results in problems. It was previously divided into two types: alcohol abuse and alcohol dependence.
In a medical context, alcoholism is said to exist when two or more of the following conditions are present: a person drinks large amounts over a long time period, has difficulty cutting down, acquiring and drinking alcohol takes up a great deal of time, alcohol is strongly desired, usage results in not fulfilling responsibilities, usage results in social problems, usage results in health problems, usage results in risky situations, withdrawal occurs when stopping, and alcohol tolerance has occurred with use. Alcoholism reduces a person's life expectancy by around ten years and alcohol use is the third leading cause of early death in the United States. No professional medical association recommends that people who are nondrinkers should start drinking alcoholic beverages. Worldwide, a total of 3.3 million deaths per year (5.9% of all deaths) are believed to be due to alcohol. Overeating and lack of muscle tone are the main cause of a beer belly, rather than beer consumption, though a 2004 study found a link between binge drinking and a beer belly. Several diet books quote beer as having an undesirably high glycemic index of 110, the same as maltose; however, the maltose in beer undergoes metabolism by yeast during fermentation so that beer consists mostly of water, hop oils and only trace amounts of sugars, including maltose. The multi-step process of beer production is effective at removing pesticide residues from grain. At each step (e.g. mashing or malting) pesticide levels are typically reduced by 50-90%, varying with the particular process and the pesticide's chemical properties. A 2013 study found that the flavour of beer alone could provoke dopamine activity in the brain of the male participants, who wanted to drink more as a result. The 49 men in the study were subjected to positron emission tomography scans, while a computer-controlled device sprayed minute amounts of beer, water and a sports drink onto their tongues. Compared with the taste of the sports drink, the taste of beer significantly increased the participants' desire to drink. Test results indicated that the flavour of the beer triggered a dopamine release, even though alcohol content in the spray was insufficient for the purpose of becoming intoxicated. Society and culture Some of the earliest writings mention the production and distribution of beer: the 1750 BC Babylonian Code of Hammurabi included laws regulating it, while "The Hymn to Ninkasi", an 1800 BC prayer to the Mesopotamian goddess of beer, also served as a recipe for it. In many societies, beer is the most popular alcoholic drink. Various social traditions and activities are associated with beer drinking, such as playing cards, darts, or other pub games; attending beer festivals; engaging in zythology (the study of beer); visiting a series of pubs in one evening; visiting breweries; beer-oriented tourism; or rating beer. Drinking games, such as beer pong, accompany the drinking of beer. Even having a "shower beer" has developed a following. A relatively new profession is that of the beer sommelier, who informs restaurant patrons about beers and food pairings. Some breweries have developed beers to pair with food. Wine writer Malcolm Gluck disputed the need to pair beer with food, while beer writers Roger Protz and Melissa Cole contested that claim. Beer is considered to be a social lubricant, and is consumed in countries all over the world. There are breweries in Middle Eastern countries such as Syria, and in some African countries.
Sales of beer are four times those of wine, which is the second most popular alcoholic drink. See also References Bibliography Further reading External links Beer Brewing Alcoholic drinks Fermented drinks
Beer
[ "Biology" ]
7,743
[ "Fermented drinks", "Biotechnology products" ]
3,364
https://en.wikipedia.org/wiki/Bit
The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either 1 or 0, but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information or negentropy, the bit is also known as a shannon, named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit, but this usage is now rare. In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits of storage, but it must be "compressed" before storage and then (generally) "decompressed" before it is used in a computation. The field of Algorithmic Information Theory is devoted to the study of the "irreducible information content" of a string (i.e. its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string. The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. History Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Physical representation A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc. Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape.
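The information-theoretic definition given above, one bit as the entropy of an equiprobable binary variable, can be checked numerically. The following Python sketch computes the binary entropy H(p) = −p·log2(p) − (1−p)·log2(1−p); it is an illustrative calculation added here, not part of the original article.

import math

def binary_entropy(p: float) -> float:
    """Entropy, in bits, of a binary variable that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0 -> exactly one bit for a fair 0/1 variable
print(binary_entropy(0.9))  # ~0.469 -> a biased variable carries less than one bit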
In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques. In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory or a solid-state drive, the two values of a bit are represented by two levels of electric charge stored in a capacitor or a floating-gate MOSFET. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes and two-dimensional QR codes, bits are encoded as lines or squares which may be either black or white. In modern digital computing, bits are transformed in Boolean logic gates. Transmission and processing Bits are transmitted one at a time in serial transmission. By contrast, multiple bits are transmitted simultaneously in a parallel transmission. A serial computer processes information in either a bit-serial or a byte-serial fashion. From the standpoint of data communications, a byte-serial transmission is an 8-way parallel transmission with binary signalling. In programming languages such as C, a bitwise operation operates on binary strings as though they are vectors of bits, rather than interpreting them as binary numbers. Data transfer rates are usually measured in decimal SI multiples. For example, a channel capacity may be specified as 8 kbit/s = 8 kb/s = 1 kB/s. Storage File sizes are often measured in (binary) IEC multiples of bytes, for example 1 KiB = 1024 bytes = 8192 bits. Confusion may arise in cases where (for historic reasons) filesizes are specified with binary multipliers using the ambiguous prefixes K, M, and G rather than the IEC standard prefixes Ki, Mi, and Gi. Mass storage devices are usually measured in decimal SI multiples, for example 1 TB = bytes. Confusingly, the storage capacity of a directly-addressable memory device, such as a DRAM chip, or an assemblage of such chips on a memory module, is specified as a binary multiple -- using the ambiguous prefix G rather than the IEC recommended Gi prefix. 
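The unit relationships used in this section (8 kbit/s corresponding to 1 kB/s, 1 KiB = 1024 bytes = 8192 bits, and the mismatch between the decimal prefix G and the binary prefix Gi) can be verified with a few lines of arithmetic. The Python sketch below is a worked illustration added here; the figures are computed, not quoted from a standard.

# Decimal (SI) data-rate units: 8 kilobits per second equal 1 kilobyte per second.
kbit_per_s = 8
print(kbit_per_s * 1000 / 8)      # 1000.0 bytes per second, i.e. 1 kB/s

# Binary (IEC) storage units: 1 KiB = 1024 bytes = 8192 bits.
one_kib_bytes = 1024
print(one_kib_bytes * 8)          # 8192 bits

# Ambiguous prefix G: decimal gigabyte versus binary gibibyte.
GB = 10**9                        # gigabyte (SI)
GiB = 2**30                       # gibibyte (IEC), what a "1 GB" DRAM chip actually holds
print(GiB)                        # 1073741824 bytes
print(f"{(GiB - GB) / GB:.1%}")   # ~7.4% more than the decimal gigabyte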
For example, a DRAM chip that is specified (and advertised) as having "1 GB" of capacity has 2^30 (1,073,741,824) bytes of capacity. As of 2022, the difference between the popular understanding of a memory system with "8 GB" of capacity, and the SI-correct meaning of "8 GB" was still causing difficulty to software designers. Unit and symbol The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit. However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte. Multiple bits Multiple bits may be expressed and represented in several ways. For convenience of representing commonly reoccurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer and for this reason it was used as the basic addressable element in many computer architectures. By 1993, the trend in hardware design had converged on the 8-bit byte. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits. Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits. The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10^3) through yotta (10^24) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit). See also Qubit (quantum bit) Trit (trinary digit) References External links Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units Binary arithmetic Primitive types Data types Units of information
Bit
[ "Mathematics" ]
2,053
[ "Quantity", "Arithmetic", "Units of information", "Binary arithmetic", "Units of measurement" ]
3,365
https://en.wikipedia.org/wiki/Byte
The byte is a unit of digital information that most commonly consists of eight bits. 1 byte (B) = 8 bits (bit). Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol () refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into the twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables or slab, before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte". The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel. Etymology and history The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It is a deliberate respelling of bite to avoid accidental mutation to bit. Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31. Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. 
These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized." The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080, the direct predecessor of the 8086, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit. The term octet unambiguously specifies a size of eight bits. It is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. Unit symbol The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates. 
The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo. Multiple-byte units More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10, following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10^3); other systems are based on powers of two. Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used. While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte. Units based on powers of 10 Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000^8 bytes. The additional prefixes ronna- for 1000^9 and quetta- for 1000^10 were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. Operating systems that use this definition include macOS, iOS, Ubuntu, and Debian. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance. Units based on powers of 2 A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2^10) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024^8 bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024^9) and quebi- (Qi, 1024^10), but have not yet been adopted by the IEC or ISO. An alternative system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024^2 bytes and 1 gigabyte (GB) is equal to 1024^3 bytes is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect, the customary convention is used by the Microsoft Windows operating system and random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra.
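The growing gap between the decimal and binary interpretations described above can be made concrete with a short calculation. The figures of about 2% at the kilo level and about 9% at the tera level come from the text; the loop below is an illustrative Python sketch added here to show how they arise.

# Relative shortfall of the SI (power-of-10) unit versus the binary (power-of-1024) unit.
prefixes = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"]
for n, name in enumerate(prefixes, start=1):
    decimal_unit = 1000 ** n
    binary_unit = 1024 ** n
    shortfall = 1 - decimal_unit / binary_unit
    print(f"{name}: decimal unit is {shortfall:.1%} smaller than the binary unit")
# kilo/kibi: 2.3%, mega/mebi: 4.6%, giga/gibi: 6.9%, tera/tebi: 9.1%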
For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.5 Leopard and iOS 10, after which they switched to units based on powers of 10. Parochial units Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g., half byte and nybble for 4 bits, and octal K. History of the conflicting definitions Contemporary computer memory has a binary architecture making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 2^10 (1,024) is approximately 10^3 (1,000). This definition was popular in early decades of personal computing, with products like the Tandon 5-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5-inch floppy disk held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1,474,560 bytes, the equivalent of 1.47 MB or 1.41 MiB. In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB) is 1024^1 bytes = 1,024 bytes, one mebibyte (1 MiB) is 1024^2 bytes = 1,048,576 bytes, and so on. In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB). Modern standard definitions The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes. Lawsuits over definition Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 10^9 (1,000,000,000) bytes (the decimal definition), rather than the binary definition (2^30, i.e., 1,073,741,824 bytes). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state'. Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled. Practical examples Common uses Many programming languages define the data type byte.
The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there be no gaps between two bytes. This means every bit in memory is part of a byte. Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C#, define byte as an unsigned type, and the sbyte as a signed data type, holding values from 0 to 255, and −128 to 127, respectively. In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication a full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit, and thus its size may vary from seven to twelve bits for five to eight bits of actual data. For synchronous communication the error checking usually uses bytes at the end of a frame. See also Data Data hierarchy Nibble Octet (computing) Primitive data type Tryte Word (computer architecture) Notes References Further reading Ashley Taylor. "Bits and Bytes". Stanford. https://web.stanford.edu/class/cs101/bits-bytes.html Data types Units of information Binary arithmetic Computer memory Data unit Primitive types 1950s neologisms 8 (number)
Byte
[ "Mathematics" ]
3,501
[ "Quantity", "Arithmetic", "Units of information", "Binary arithmetic", "Units of measurement" ]
3,370
https://en.wikipedia.org/wiki/Boron%20nitride
Boron nitride is a thermally and chemically resistant refractory compound of boron and nitrogen with the chemical formula BN. It exists in various crystalline forms that are isoelectronic to a similarly structured carbon lattice. The hexagonal form corresponding to graphite is the most stable and soft among BN polymorphs, and is therefore used as a lubricant and an additive to cosmetic products. The cubic (zincblende aka sphalerite structure) variety analogous to diamond is called c-BN; it is softer than diamond, but its thermal and chemical stability is superior. The rare wurtzite BN modification is similar to lonsdaleite but slightly softer than the cubic form. Because of excellent thermal and chemical stability, boron nitride ceramics are used in high-temperature equipment and metal casting. Boron nitride has potential use in nanotechnology. History Boron nitride was discovered by a chemistry teacher at the Liverpool Institute in 1842 via reduction of boric acid with charcoal in the presence of potassium cyanide. Structure Boron nitride exists in multiple forms that differ in the arrangement of the boron and nitrogen atoms, giving rise to varying bulk properties of the material. Amorphous form (a-BN) The amorphous form of boron nitride (a-BN) is non-crystalline, lacking any long-distance regularity in the arrangement of its atoms. It is analogous to amorphous carbon. All other forms of boron nitride are crystalline. Hexagonal form (h-BN) The most stable crystalline form is the hexagonal one, also called h-BN, α-BN, g-BN, graphitic boron nitride and "white graphene". Hexagonal boron nitride (point group = D3h; space group = P63/mmc) has a layered structure similar to graphite. Within each layer, boron and nitrogen atoms are bound by strong covalent bonds, whereas the layers are held together by weak van der Waals forces. The interlayer "registry" of these sheets differs, however, from the pattern seen for graphite, because the atoms are eclipsed, with boron atoms lying over and above nitrogen atoms. This registry reflects the local polarity of the B–N bonds, as well as interlayer N-donor/B-acceptor characteristics. Likewise, many metastable forms consisting of differently stacked polytypes exist. Therefore, h-BN and graphite are very close neighbors, and the material can accommodate carbon as a substituent element to form BNCs. BC6N hybrids have been synthesized, where carbon substitutes for some B and N atoms. A hexagonal boron nitride monolayer is analogous to graphene, having a honeycomb lattice structure of nearly the same dimensions. Unlike graphene, which is black and an electrical conductor, an h-BN monolayer is white and an insulator. It has been proposed for use as an atomically flat insulating substrate or a tunneling dielectric barrier in 2D electronics. Cubic form (c-BN) Cubic boron nitride has a crystal structure analogous to that of diamond. Consistent with diamond being less stable than graphite, the cubic form is less stable than the hexagonal form, but the conversion rate between the two is negligible at room temperature, as it is for diamond. The cubic form has the sphalerite crystal structure (space group = F-43m), the same as that of diamond (with ordered B and N atoms), and is also called β-BN or c-BN. Wurtzite form (w-BN) The wurtzite form of boron nitride (w-BN; point group = C6v; space group = P63mc) has the same structure as lonsdaleite, a rare hexagonal polymorph of carbon. As in the cubic form, the boron and nitrogen atoms are grouped into tetrahedra.
In the wurtzite form, the boron and nitrogen atoms are grouped into 6-membered rings. In the cubic form all rings are in the chair configuration, whereas in w-BN the rings between 'layers' are in boat configuration. Earlier optimistic reports predicted that the wurtzite form was very strong, and it was estimated by a simulation as potentially having a strength 18% greater than that of diamond. Since only small amounts of the mineral exist in nature, this has not yet been experimentally verified. Its hardness is 46 GPa, slightly harder than commercial borides but softer than the cubic form of boron nitride. Properties Physical The partly ionic structure of BN layers in h-BN reduces covalency and electrical conductivity, whereas the interlayer interaction increases, resulting in higher hardness of h-BN relative to graphite. The reduced electron delocalization in hexagonal BN is also indicated by its absence of color and a large band gap. Very different bonding – strong covalent within the basal planes (planes where boron and nitrogen atoms are covalently bonded) and weak between them – causes high anisotropy of most properties of h-BN. For example, the hardness, electrical and thermal conductivity are much higher within the planes than perpendicular to them. On the contrary, the properties of c-BN and w-BN are more homogeneous and isotropic. Those materials are extremely hard, with the hardness of bulk c-BN being slightly smaller and that of w-BN even higher than that of diamond. Polycrystalline c-BN with grain sizes on the order of 10 nm is also reported to have Vickers hardness comparable to or higher than that of diamond. Because of much better stability to heat and transition metals, c-BN surpasses diamond in mechanical applications, such as machining steel. The thermal conductivity of BN is among the highest of all electric insulators (see table). Boron nitride can be doped p-type with beryllium and n-type with boron, sulfur, silicon or if co-doped with carbon and nitrogen. Both hexagonal and cubic BN are wide-gap semiconductors with a band-gap energy corresponding to the UV region. If voltage is applied to h-BN or c-BN, then it emits UV light in the range 215–250 nm and therefore can potentially be used as light-emitting diodes (LEDs) or lasers. Little is known about the melting behavior of boron nitride. It degrades at 2973 °C, but melts at elevated pressure. Thermal stability Hexagonal and cubic BN (and probably w-BN) show remarkable chemical and thermal stabilities. For example, h-BN is stable to decomposition at temperatures up to 1000 °C in air, 1400 °C in vacuum, and 2800 °C in an inert atmosphere. The reactivity of h-BN and c-BN is relatively similar, and the data for c-BN are summarized in the table below. Thermal stability of c-BN can be summarized as follows: In air or oxygen: a protective layer prevents further oxidation to ~1300 °C; no conversion to hexagonal form at 1400 °C. In nitrogen: some conversion to h-BN at 1525 °C after 12 h. In vacuum: conversion to h-BN at 1550–1600 °C. Chemical stability Boron nitride is not attacked by the usual acids, but it is soluble in alkaline molten salts and nitrides, such as LiOH and KOH, which are therefore used to etch BN. Thermal conductivity The theoretical thermal conductivity of hexagonal boron nitride nanoribbons (BNNRs) can approach 1700–2000 W/(m⋅K), which has the same order of magnitude as the experimentally measured value for graphene, and can be comparable to the theoretical calculations for graphene nanoribbons.
Moreover, the thermal transport in the BNNRs is anisotropic. The thermal conductivity of zigzag-edged BNNRs is about 20% larger than that of armchair-edged nanoribbons at room temperature. Mechanical properties BN nanosheets consist of hexagonal boron nitride (h-BN). They are stable up to 800 °C in air. The structure of monolayer BN is similar to that of graphene; it has exceptional strength and can serve as a high-temperature lubricant and as a substrate in electronic devices. The anisotropy of Young's modulus and Poisson's ratio depends on the system size. h-BN also exhibits strongly anisotropic strength and toughness, and maintains these over a range of vacancy defects, showing that the anisotropy is independent of the defect type. Natural occurrence In 2009, the cubic form (c-BN) was reported in Tibet, and the name qingsongite was proposed. The substance was found in dispersed micron-sized inclusions in chromium-rich rocks. In 2013, the International Mineralogical Association affirmed the mineral and the name. Synthesis Preparation and reactivity of hexagonal BN Hexagonal boron nitride is obtained by treating boron trioxide (B2O3) or boric acid (H3BO3) with ammonia (NH3) or urea (CO(NH2)2) in an inert atmosphere: B2O3 + 2 NH3 → 2 BN + 3 H2O (T = 900 °C); H3BO3 + NH3 → BN + 3 H2O (T = 900 °C); B2O3 + CO(NH2)2 → 2 BN + CO2 + 2 H2O (T > 1000 °C); further nitridation routes are used at T > 1500 °C. The resulting disordered (amorphous) material contains 92–95% BN and 5–8% residual boron oxide. The remaining boron oxide can be evaporated in a second step at higher temperatures in order to achieve a BN concentration >98%. Such annealing also crystallizes BN, the size of the crystallites increasing with the annealing temperature. h-BN parts can be fabricated inexpensively by hot-pressing with subsequent machining. The parts are made from boron nitride powders with boron oxide added for better compressibility. Thin films of boron nitride can be obtained by chemical vapor deposition from borazine. ZYP Coatings also has developed boron nitride coatings that may be painted on a surface. Combustion of boron powder in nitrogen plasma at 5500 °C yields ultrafine boron nitride used for lubricants and toners. Boron nitride reacts with iodine fluoride to give nitrogen triiodide (NI3) in low yield. Boron nitride reacts with nitrides of lithium, alkaline earth metals and lanthanides to form nitridoborates; for example, reaction with lithium nitride gives Li3BN2. Intercalation of hexagonal BN Various species, such as ammonia or alkali metals, intercalate into hexagonal BN. Preparation of cubic BN c-BN is prepared analogously to the preparation of synthetic diamond from graphite. Direct conversion of hexagonal boron nitride to the cubic form has been observed at pressures between 5 and 18 GPa and temperatures between 1730 and 3230 °C, parameters similar to those for the direct graphite-diamond conversion. The addition of a small amount of boron oxide can lower the required pressure to 4–7 GPa and temperature to 1500 °C. As in diamond synthesis, to further reduce the conversion pressures and temperatures, a catalyst is added, such as lithium, potassium, or magnesium, their nitrides, their fluoronitrides, water with ammonium compounds, or hydrazine. Other industrial synthesis methods, again borrowed from diamond growth, use crystal growth in a temperature gradient, or an explosive shock wave. The shock wave method is used to produce a material called heterodiamond, a superhard compound of boron, carbon, and nitrogen. Low-pressure deposition of thin films of cubic boron nitride is possible. As in diamond growth, the major problem is to suppress the growth of hexagonal phases (h-BN or graphite, respectively).
Whereas in diamond growth this is achieved by adding hydrogen gas, boron trifluoride is used for c-BN. Ion beam deposition, plasma-enhanced chemical vapor deposition, pulsed laser deposition, reactive sputtering, and other physical vapor deposition methods are used as well. Preparation of wurtzite BN Wurtzite BN can be obtained via static high-pressure or dynamic shock methods. The limits of its stability are not well defined. Both c-BN and w-BN are formed by compressing h-BN, but formation of w-BN occurs at much lower temperatures close to 1700 °C. Production statistics Whereas the production and consumption figures for the raw materials used for BN synthesis, namely boric acid and boron trioxide, are well known (see boron), the corresponding numbers for the boron nitride are not listed in statistical reports. An estimate for the 1999 world production is 300 to 350 metric tons. The major producers and consumers of BN are located in the United States, Japan, China and Germany. In 2000, prices varied from about $75–120/kg for standard industrial-quality h-BN and were about up to $200–400/kg for high purity BN grades. Applications Hexagonal BN Hexagonal BN (h-BN) is the most widely used polymorph. It is a good lubricant at both low and high temperatures (up to 900 °C, even in an oxidizing atmosphere). h-BN lubricant is particularly useful when the electrical conductivity or chemical reactivity of graphite (alternative lubricant) would be problematic. In internal combustion engines, where graphite could be oxidized and turn into carbon sludge, h-BN with its superior thermal stability can be added to engine lubricants. As with all nano-particle suspensions, Brownian-motion settlement is a problem. Settlement can clog engine oil filters, which limits solid lubricant applications in a combustion engine to automotive racing, where engine re-building is common. Since carbon has appreciable solubility in certain alloys (such as steels), which may lead to degradation of properties, BN is often superior for high temperature and/or high pressure applications. Another advantage of h-BN over graphite is that its lubricity does not require water or gas molecules trapped between the layers. Therefore, h-BN lubricants can be used in vacuum, such as space applications. The lubricating properties of fine-grained h-BN are used in cosmetics, paints, dental cements, and pencil leads. Hexagonal BN was first used in cosmetics around 1940 in Japan. Because of its high price, h-BN was abandoned for this application. Its use was revitalized in the late 1990s with the optimization h-BN production processes, and currently h-BN is used by nearly all leading producers of cosmetic products for foundations, make-up, eye shadows, blushers, kohl pencils, lipsticks and other skincare products. Because of its excellent thermal and chemical stability, boron nitride ceramics and coatings are used high-temperature equipment. h-BN can be included in ceramics, alloys, resins, plastics, rubbers, and other materials, giving them self-lubricating properties. Such materials are suitable for construction of e.g. bearings and in steelmaking. Many quantum devices use multilayer h-BN as a substrate material. It can also be used as a dielectric in resistive random access memories. Hexagonal BN is used in xerographic process and laser printers as a charge leakage barrier layer of the photo drum. 
In the automotive industry, h-BN mixed with a binder (boron oxide) is used for sealing oxygen sensors, which provide feedback for adjusting fuel flow. The binder utilizes the unique temperature stability and insulating properties of h-BN. Parts can be made by hot pressing from four commercial grades of h-BN. Grade HBN contains a boron oxide binder; it is usable up to 550–850 °C in oxidizing atmosphere and up to 1600 °C in vacuum, but due to the boron oxide content is sensitive to water. Grade HBR uses a calcium borate binder and is usable at 1600 °C. Grades HBC and HBT contain no binder and can be used up to 3000 °C. Boron nitride nanosheets (h-BN) can be deposited by catalytic decomposition of borazine at a temperature ~1100 °C in a chemical vapor deposition setup, over areas up to about 10 cm2. Owing to their hexagonal atomic structure, small lattice mismatch with graphene (~2%), and high uniformity they are used as substrates for graphene-based devices. BN nanosheets are also excellent proton conductors. Their high proton transport rate, combined with the high electrical resistance, may lead to applications in fuel cells and water electrolysis. h-BN has been used since the mid-2000s as a bullet and bore lubricant in precision target rifle applications as an alternative to molybdenum disulfide coating, commonly referred to as "moly". It is claimed to increase effective barrel life, increase intervals between bore cleaning and decrease the deviation in point of impact between clean bore first shots and subsequent shots. h-BN is used as a release agent in molten metal and glass applications. For example, ZYP Coatings developed and currently produces a line of paintable h-BN coatings that are used by manufacturers of molten aluminium, non-ferrous metal, and glass. Because h-BN is nonwetting and lubricious to these molten materials, the coated surface (i.e. mold or crucible) does not stick to the material. Cubic BN Cubic boron nitride (CBN or c-BN) is widely used as an abrasive. Its usefulness arises from its insolubility in iron, nickel, and related alloys at high temperatures, whereas diamond is soluble in these metals. Polycrystalline c-BN (PCBN) abrasives are therefore used for machining steel, whereas diamond abrasives are preferred for aluminum alloys, ceramics, and stone. When in contact with oxygen at high temperatures, BN forms a passivation layer of boron oxide. Boron nitride binds well with metals due to formation of interlayers of metal borides or nitrides. Materials with cubic boron nitride crystals are often used in the tool bits of cutting tools. For grinding applications, softer binders such as resin, porous ceramics and soft metals are used. Ceramic binders can be used as well. Commercial products are known under names "Borazon" (by Hyperion Materials & Technologies), and "Elbor" or "Cubonite" (by Russian vendors). Contrary to diamond, large c-BN pellets can be produced in a simple process (called sintering) of annealing c-BN powders in nitrogen flow at temperatures slightly below the BN decomposition temperature. This ability of c-BN and h-BN powders to fuse allows cheap production of large BN parts. Similar to diamond, the combination in c-BN of highest thermal conductivity and electrical resistivity is ideal for heat spreaders. 
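To give a sense of scale for the heat-spreader use mentioned above, a simple one-dimensional conduction estimate can be made. The numbers here are illustrative assumptions only (a 300 μm-thick c-BN layer of 1 cm2 area, an assumed through-thickness conductivity of roughly 700 W/(m·K), and 100 W of dissipated power), not values taken from this article or from any specific datasheet:

\[ R = \frac{d}{kA} = \frac{300\times10^{-6}\ \mathrm{m}}{(700\ \mathrm{W\,m^{-1}K^{-1}})(10^{-4}\ \mathrm{m^{2}})} \approx 4\times10^{-3}\ \mathrm{K/W}, \qquad \Delta T = PR \approx 0.4\ \mathrm{K}. \]

Under these assumptions the temperature drop across the spreader is well below one kelvin, which illustrates why a high-conductivity electrical insulator such as c-BN is attractive wherever an electrically isolating heat path is needed.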
As cubic boron nitride consists of light atoms and is very robust chemically and mechanically, it is one of the popular materials for X-ray membranes: low mass results in small X-ray absorption, and good mechanical properties allow usage of thin membranes, further reducing the absorption. Amorphous BN Layers of amorphous boron nitride (a-BN) are used in some semiconductor devices, e.g. MOSFETs. They can be prepared by chemical decomposition of trichloroborazine with caesium, or by thermal chemical vapor deposition methods. Thermal CVD can also be used for deposition of h-BN layers, or at high temperatures, c-BN. Other forms of boron nitride Atomically thin boron nitride Hexagonal boron nitride can be exfoliated to mono- or few-atomic-layer sheets. Due to its analogous structure to that of graphene, atomically thin boron nitride is sometimes called white graphene. Mechanical properties Atomically thin boron nitride is one of the strongest electrically insulating materials. Monolayer boron nitride has an average Young's modulus of 0.865 TPa and a fracture strength of 70.5 GPa, and in contrast to graphene, whose strength decreases dramatically with increased thickness, few-layer boron nitride sheets have a strength similar to that of monolayer boron nitride. Thermal conductivity Atomically thin boron nitride has one of the highest thermal conductivity coefficients (751 W/(m·K) at room temperature) among semiconductors and electrical insulators, and its thermal conductivity increases with reduced thickness due to less inter-layer coupling. Thermal stability The air stability of graphene shows a clear thickness dependence: monolayer graphene is reactive to oxygen at 250 °C, strongly doped at 300 °C, and etched at 450 °C; in contrast, bulk graphite is not oxidized until 800 °C. Atomically thin boron nitride has much better oxidation resistance than graphene. Monolayer boron nitride is not oxidized until 700 °C and can sustain up to 850 °C in air; bilayer and trilayer boron nitride nanosheets have slightly higher oxidation starting temperatures. The excellent thermal stability, high impermeability to gas and liquid, and electrical insulation make atomically thin boron nitride a potential coating material for preventing surface oxidation and corrosion of metals and other two-dimensional (2D) materials, such as black phosphorus. Better surface adsorption Atomically thin boron nitride has been found to have better surface adsorption capabilities than bulk hexagonal boron nitride. According to theoretical and experimental studies, atomically thin boron nitride as an adsorbent experiences conformational changes upon surface adsorption of molecules, increasing adsorption energy and efficiency. The synergistic effect of the atomic thickness, high flexibility, stronger surface adsorption capability, electrical insulation, impermeability, and high thermal and chemical stability of BN nanosheets can increase the Raman sensitivity by up to two orders of magnitude, and at the same time attain long-term stability and reusability not readily achievable by other materials. Dielectric properties Atomically thin hexagonal boron nitride is an excellent dielectric substrate for graphene, molybdenum disulfide (MoS2), and many other 2D material-based electronic and photonic devices. 
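As an illustration of h-BN's role as a thin dielectric, a parallel-plate estimate of gate capacitance per unit area can be made. The relative permittivity and thickness used here are assumptions for the sake of the example (an out-of-plane permittivity of about 3.5 is commonly quoted for h-BN, and 10 nm corresponds to a film of a few dozen layers); they are not figures taken from this article:

\[ \frac{C}{A} = \frac{\varepsilon_{0}\,\varepsilon_{r}}{d} = \frac{(8.85\times10^{-12}\ \mathrm{F/m})(3.5)}{10\times10^{-9}\ \mathrm{m}} \approx 3.1\times10^{-3}\ \mathrm{F/m^{2}} \approx 0.31\ \mu\mathrm{F/cm^{2}}. \]

Capacitances of this order are comparable to those of conventional gate oxides of similar thickness, which is one reason atomically thin h-BN is used as a substrate and gate dielectric for graphene- and MoS2-based devices.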
As shown by electric force microscopy (EFM) studies, the electric field screening in atomically thin boron nitride shows a weak dependence on thickness, which is in line with the smooth decay of electric field inside few-layer boron nitride revealed by first-principles calculations. Raman characteristics Raman spectroscopy has been a useful tool to study a variety of 2D materials, and the Raman signature of high-quality atomically thin boron nitride was first reported by Gorbachev et al. in 2011 and by Li et al. However, the two reported Raman results of monolayer boron nitride did not agree with each other. Cai et al., therefore, conducted systematic experimental and theoretical studies to reveal the intrinsic Raman spectrum of atomically thin boron nitride. It reveals that atomically thin boron nitride without interaction with a substrate has a G band frequency similar to that of bulk hexagonal boron nitride, but strain induced by the substrate can cause Raman shifts. Nevertheless, the Raman intensity of the G band of atomically thin boron nitride can be used to estimate layer thickness and sample quality. Boron nitride nanomesh Boron nitride nanomesh is a nanostructured two-dimensional material. It consists of a single BN layer, which self-assembles into a highly regular mesh after high-temperature exposure of a clean rhodium or ruthenium surface to borazine under ultra-high vacuum. The nanomesh looks like an assembly of hexagonal pores. The distance between two pore centers is 3.2 nm and the pore diameter is ~2 nm. Other terms for this material are boronitrene or white graphene. The boron nitride nanomesh is air-stable, compatible with some liquids, and stable up to temperatures of 800 °C. Boron nitride nanotubes Boron nitride tubules were first made in 1989 by Shore and Dolan. This work was patented in 1989 and published in a 1989 thesis (Dolan) and then in Science in 1993. The 1989 work was also the first preparation of amorphous BN from B-trichloroborazine and cesium metal. Boron nitride nanotubes were predicted in 1994 and experimentally discovered in 1995. They can be imagined as a rolled-up sheet of hexagonal boron nitride. Structurally, a BN nanotube is a close analog of the carbon nanotube, namely a long cylinder with a diameter of several to a hundred nanometers and a length of many micrometers, except that carbon atoms are alternately substituted by nitrogen and boron atoms. However, the properties of BN nanotubes are very different: whereas carbon nanotubes can be metallic or semiconducting depending on the rolling direction and radius, a BN nanotube is an electrical insulator with a bandgap of ~5.5 eV, basically independent of tube chirality and morphology. In addition, a layered BN structure is much more thermally and chemically stable than a graphitic carbon structure. Boron nitride aerogel Boron nitride aerogel is an aerogel made of highly porous BN. It typically consists of a mixture of deformed BN nanotubes and nanosheets. It can have a density as low as 0.6 mg/cm3 and a specific surface area as high as 1050 m2/g, and therefore has potential applications as an absorbent, catalyst support and gas storage medium. BN aerogels are highly hydrophobic and can absorb up to 160 times their weight in oil. They are resistant to oxidation in air at temperatures up to 1200 °C, and hence can be reused after the absorbed oil is burned out by flame. BN aerogels can be prepared by template-assisted chemical vapor deposition using borazine as the feed gas. 
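The ~5.5 eV bandgap quoted above for BN nanotubes can be put in more familiar terms with the standard photon energy–wavelength conversion (a generic textbook relation, not a measurement from this article):

\[ \lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\cdot nm}}{5.5\ \mathrm{eV}} \approx 225\ \mathrm{nm}, \]

i.e. deep-ultraviolet light, consistent with the 215–250 nm UV emission range mentioned earlier for h-BN and c-BN.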
Composites containing BN Addition of boron nitride to silicon nitride ceramics improves the thermal shock resistance of the resulting material. For the same purpose, BN is also added to silicon nitride-alumina and titanium nitride-alumina ceramics. Other materials being reinforced with BN include alumina and zirconia, borosilicate glasses, glass ceramics, enamels, and composite ceramics with titanium boride-boron nitride, titanium boride-aluminium nitride-boron nitride, and silicon carbide-boron nitride composition. Zirconia Stabilized Boron Nitride (ZSBN) is produced by adding zirconia to BN, enhancing its thermal shock resistance and mechanical strength through a sintering process. It offers better performance characteristics, including superior corrosion and erosion resistance over a wide temperature range. Its unique combination of thermal conductivity, lubricity, mechanical strength, and stability makes it suitable for various applications including cutting tools and wear-resistant coatings, thermal and electrical insulation, aerospace and defense, and high-temperature components. Pyrolytic boron nitride (PBN) Pyrolytic boron nitride (PBN), also known as chemical vapour-deposited boron nitride (CVD-BN), is a high-purity ceramic material characterized by exceptional chemical resistance and mechanical strength at high temperatures. Pyrolytic boron nitride is typically prepared through the thermal decomposition of boron trichloride and ammonia vapors on graphite substrates at 1900 °C. PBN generally has a hexagonal structure similar to hexagonal boron nitride (hBN), though it can exhibit stacking faults or deviations from the ideal lattice. PBN shows some remarkable attributes, including exceptional chemical inertness, high dielectric strength, excellent thermal shock resistance, non-wettability, non-toxicity, oxidation resistance, and minimal outgassing. Due to a highly ordered planar texture similar to pyrolytic graphite (PG), it exhibits anisotropic properties such as a lower dielectric constant perpendicular to the crystal plane and higher bending strength along the crystal plane. PBN material has been widely manufactured as crucibles for compound semiconductor crystal growth, output windows and dielectric rods of traveling-wave tubes, and high-temperature jigs and insulators. Health issues Boron nitride (along with related nitrides such as NbN and BNC) is generally considered to be non-toxic and does not exhibit chemical activity in biological systems. Due to its excellent safety profile and lubricious properties, boron nitride finds widespread use in various applications, including cosmetics and food processing equipment. See also Beta carbon nitride Borazon Borocarbonitrides Boron suboxide Superhard materials Wide-bandgap semiconductors Notes References External links National Pollutant Inventory: Boron and Compounds Materials Safety Data Sheet at University of Oxford Boron compounds Ceramic materials Nitrides III-V semiconductors Non-petroleum based lubricants Dry lubricants Abrasives Superhard materials Neutron poisons Monolayers III-V compounds Boron–nitrogen compounds Zincblende crystal structure Wurtzite structure type
Boron nitride
[ "Physics", "Chemistry", "Engineering" ]
6,224
[ "Monolayers", "Inorganic compounds", "Semiconductor materials", "Materials", "Superhard materials", "Ceramic materials", "III-V semiconductors", "Ceramic engineering", "III-V compounds", "Atoms", "Matter" ]
3,378
https://en.wikipedia.org/wiki/Beryllium
Beryllium is a chemical element; it has symbol Be and atomic number 4. It is a steel-gray, hard, strong, lightweight and brittle alkaline earth metal. It is a divalent element that occurs naturally only in combination with other elements to form minerals. Gemstones high in beryllium include beryl (aquamarine, emerald, red beryl) and chrysoberyl. It is a relatively rare element in the universe, usually occurring as a product of the spallation of larger atomic nuclei that have collided with cosmic rays. Within the cores of stars, beryllium is depleted as it is fused into heavier elements. Beryllium constitutes about 0.0004 percent by mass of Earth's crust. The world's annual beryllium production of 220 tons is mostly obtained by extraction from the mineral beryl, a difficult process because beryllium bonds strongly to oxygen. In structural applications, the combination of high flexural rigidity, thermal stability, thermal conductivity and low density (1.85 times that of water) makes beryllium a desirable aerospace material for aircraft components, missiles, spacecraft, and satellites. Because of its low density and atomic mass, beryllium is relatively transparent to X-rays and other forms of ionizing radiation; therefore, it is the most common window material for X-ray equipment and components of particle detectors. When added as an alloying element to aluminium, copper (notably the alloy beryllium copper), iron, or nickel, beryllium improves many physical properties. For example, tools and components made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. In air, the surface of beryllium oxidizes readily at room temperature to form a passivation layer 1–10 nm thick that protects it from further oxidation and corrosion. The metal oxidizes in bulk (beyond the passivation layer) when heated above about 500 °C, and burns brilliantly when heated to about 2500 °C. The commercial use of beryllium requires the use of appropriate dust control equipment and industrial controls at all times because of the toxicity of inhaled beryllium-containing dusts that can cause a chronic life-threatening allergic disease, berylliosis, in some people. Berylliosis is typically manifested by chronic pulmonary fibrosis and, in severe cases, right-sided heart failure and death. Characteristics Physical properties Beryllium is a steel gray and hard metal that is brittle at room temperature and has a close-packed hexagonal crystal structure. It has exceptional stiffness (Young's modulus 287 GPa) and a melting point of 1287 °C. The modulus of elasticity of beryllium is approximately 35% greater than that of steel. The combination of this modulus and a relatively low density results in an unusually fast sound conduction speed in beryllium – about 12.9 km/s at ambient conditions. Other significant properties are high specific heat and thermal conductivity, which make beryllium the metal with the best heat dissipation characteristics per unit weight. In combination with the relatively low coefficient of linear thermal expansion (11.4 × 10−6 K−1), these characteristics result in a unique stability under conditions of thermal loading. Nuclear properties Naturally occurring beryllium, save for slight contamination by the cosmogenic radioisotopes, is isotopically pure beryllium-9, which has a nuclear spin of 3/2. Beryllium has a large scattering cross section for high-energy neutrons, about 6 barns for energies above approximately 10 keV. 
Therefore, it works as a neutron reflector and neutron moderator, effectively slowing the neutrons to the thermal energy range of below 0.03 eV, where the total cross section is at least an order of magnitude lower; the exact value strongly depends on the purity and size of the crystallites in the material. The single primordial beryllium isotope 9Be also undergoes a (n,2n) neutron reaction with neutron energies over about 1.9 MeV, to produce 8Be, which almost immediately breaks into two alpha particles. Thus, for high-energy neutrons, beryllium is a neutron multiplier, releasing more neutrons than it absorbs. This nuclear reaction is: 9Be + n → 2 4He + 2 n. Neutrons are liberated when beryllium nuclei are struck by energetic alpha particles, producing the nuclear reaction 9Be + 4He → 12C + n, where 4He is an alpha particle and 12C is a carbon-12 nucleus. Beryllium also releases neutrons under bombardment by gamma rays. Thus, natural beryllium bombarded either by alphas or gammas from a suitable radioisotope is a key component of most radioisotope-powered nuclear reaction neutron sources for the laboratory production of free neutrons. Small amounts of tritium are liberated when 9Be nuclei absorb low-energy neutrons in the three-step nuclear reaction 9Be + n → 4He + 6He, 6He → 6Li + β−, 6Li + n → 4He + 3H. 6He has a half-life of only 0.8 seconds, β− is an electron, and 6Li has a high neutron absorption cross section. Tritium is a radioisotope of concern in nuclear reactor waste streams. Optical properties As a metal, beryllium is transparent or translucent to most wavelengths of X-rays and gamma rays, making it useful for the output windows of X-ray tubes and other such apparatus. Isotopes and nucleosynthesis Both stable and unstable isotopes of beryllium are created in stars, but the radioisotopes do not last long. It is believed that most of the stable beryllium in the universe was originally created in the interstellar medium when cosmic rays induced fission in heavier elements found in interstellar gas and dust. Primordial beryllium contains only one stable isotope, 9Be, and therefore beryllium is, uniquely among all stable elements with an even atomic number, a monoisotopic and mononuclidic element. Radioactive cosmogenic 10Be is produced in the atmosphere of the Earth by the cosmic ray spallation of oxygen. 10Be accumulates at the soil surface, where its relatively long half-life (1.36 million years) permits a long residence time before decaying to boron-10. Thus, 10Be and its daughter products are used to examine natural soil erosion, soil formation and the development of lateritic soils, and as a proxy for measurement of the variations in solar activity and the age of ice cores. The production of 10Be is inversely proportional to solar activity, because increased solar wind during periods of high solar activity decreases the flux of galactic cosmic rays that reach the Earth. Nuclear explosions also form 10Be by the reaction of fast neutrons with 13C in the carbon dioxide in air. This is one of the indicators of past activity at nuclear weapon test sites. The isotope 7Be (half-life 53 days) is also cosmogenic, and shows an atmospheric abundance linked to sunspots, much like 10Be. 8Be has a very short half-life of about 8 × 10−17 s, which contributes to its significant cosmological role, as elements heavier than beryllium could not have been produced by nuclear fusion in the Big Bang. 
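As a rough check on why alpha bombardment of beryllium liberates neutrons (the 9Be + 4He → 12C + n reaction above), the reaction energy can be estimated from tabulated atomic mass excesses; the figures used here are standard nuclear-data values quoted from memory and should be verified against a current evaluation:

\[ Q = \Delta(^{9}\mathrm{Be}) + \Delta(^{4}\mathrm{He}) - \Delta(^{12}\mathrm{C}) - \Delta(n) \approx 11.35 + 2.42 - 0 - 8.07 \approx 5.7\ \mathrm{MeV}. \]

The positive Q-value means the reaction is exothermic, which is why a simple mixture of an alpha emitter and beryllium powder works as a laboratory neutron source.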
This is due to the lack of sufficient time during the Big Bang's nucleosynthesis phase to produce carbon by the fusion of 4He nuclei and the very low concentrations of available beryllium-8. British astronomer Sir Fred Hoyle first showed that the energy levels of 8Be and 12C allow carbon production by the so-called triple-alpha process in helium-fueled stars where more nucleosynthesis time is available. This process allows carbon to be produced in stars, but not in the Big Bang. Star-created carbon (the basis of carbon-based life) is thus a component in the elements in the gas and dust ejected by AGB stars and supernovae (see also Big Bang nucleosynthesis), as well as the creation of all other elements with atomic numbers larger than that of carbon. The 2s electrons of beryllium may contribute to chemical bonding. Therefore, when 7Be decays by L-electron capture, it does so by taking electrons from its atomic orbitals that may be participating in bonding. This makes its decay rate dependent to a measurable degree upon its chemical surroundings – a rare occurrence in nuclear decay. The shortest-lived known isotope of beryllium is 16Be, which decays through neutron emission with a half-life of . The exotic isotopes 11Be and 14Be are known to exhibit a nuclear halo. This phenomenon can be understood as the nuclei of 11Be and 14Be have, respectively, 1 and 4 neutrons orbiting substantially outside the classical Fermi 'waterdrop' model of the nucleus. Occurrence The Sun has a concentration of 0.1 parts per billion (ppb) of beryllium. Beryllium has a concentration of 2 to 6 parts per million (ppm) in the Earth's crust and is the 47th most abundant element. It is most concentrated in the soils at 6 ppm. Trace amounts of 9Be are found in the Earth's atmosphere. The concentration of beryllium in sea water is 0.2–0.6 parts per trillion. In stream water, however, beryllium is more abundant with a concentration of 0.1 ppb. Beryllium is found in over 100 minerals, but most are uncommon to rare. The more common beryllium containing minerals include: bertrandite (Be4Si2O7(OH)2), beryl (Al2Be3Si6O18), chrysoberyl (Al2BeO4) and phenakite (Be2SiO4). Precious forms of beryl are aquamarine, red beryl and emerald. The green color in gem-quality forms of beryl comes from varying amounts of chromium (about 2% for emerald). The two main ores of beryllium, beryl and bertrandite, are found in Argentina, Brazil, India, Madagascar, Russia and the United States. Total world reserves of beryllium ore are greater than 400,000 tonnes. Production The extraction of beryllium from its compounds is a difficult process due to its high affinity for oxygen at elevated temperatures, and its ability to reduce water when its oxide film is removed. Currently the United States, China and Kazakhstan are the only three countries involved in the industrial-scale extraction of beryllium. Kazakhstan produces beryllium from a concentrate stockpiled before the breakup of the Soviet Union around 1991. This resource had become nearly depleted by mid-2010s. Production of beryllium in Russia was halted in 1997, and is planned to be resumed in the 2020s. Beryllium is most commonly extracted from the mineral beryl, which is either sintered using an extraction agent or melted into a soluble mixture. The sintering process involves mixing beryl with sodium fluorosilicate and soda at to form sodium fluoroberyllate, aluminium oxide and silicon dioxide. 
Beryllium hydroxide is precipitated from a solution of sodium fluoroberyllate and sodium hydroxide in water. The extraction of beryllium using the melt method involves grinding beryl into a powder and heating it until molten. The melt is quickly cooled with water and then reheated in concentrated sulfuric acid, mostly yielding beryllium sulfate and aluminium sulfate. Aqueous ammonia is then used to remove the aluminium and sulfur, leaving beryllium hydroxide. Beryllium hydroxide created using either the sinter or melt method is then converted into beryllium fluoride or beryllium chloride. To form the fluoride, aqueous ammonium hydrogen fluoride is added to beryllium hydroxide to yield a precipitate of ammonium tetrafluoroberyllate, which is heated to form beryllium fluoride. Heating the fluoride with magnesium forms finely divided beryllium, and additional heating creates the compact metal. Heating beryllium hydroxide forms beryllium oxide, which becomes beryllium chloride when combined with carbon and chlorine. Electrolysis of molten beryllium chloride is then used to obtain the metal. Chemical properties Beryllium has a high electronegativity compared to other group 2 elements; thus C-Be bonds are less highly polarized than other C-MII bonds, although the attached carbon still bears a negative dipole moment. A beryllium atom has the electronic configuration [He] 2s2. The predominant oxidation state of beryllium is +2; the beryllium atom has lost both of its valence electrons. Complexes of beryllium in lower oxidation states are exceedingly rare. For example, bis(carbene) compounds proposed to contain beryllium in the 0 and +1 oxidation state have been reported, although these claims have proved controversial. A stable complex with a Be-Be bond, which formally features beryllium in the +1 oxidation state, has been described. Beryllium's chemical behavior is largely a result of its small atomic and ionic radii. It thus has very high ionization potentials and strong polarization while bonded to other atoms, which is why all of its compounds are covalent. Its chemistry has similarities to that of aluminium, an example of a diagonal relationship. At room temperature, the surface of beryllium forms a 1−10 nm-thick oxide passivation layer that prevents further reactions with air, except for gradual thickening of the oxide up to about 25 nm. When heated above about 500 °C, oxidation into the bulk metal progresses along grain boundaries. Once the metal is ignited in air by heating above the oxide melting point around 2500 °C, beryllium burns brilliantly, forming a mixture of beryllium oxide and beryllium nitride. Beryllium dissolves readily in non-oxidizing acids, such as HCl and dilute H2SO4, but not in nitric acid or water as this forms the oxide. This behavior is similar to that of aluminium. Beryllium also dissolves in alkali solutions. Binary compounds of beryllium(II) are polymeric in the solid state. BeF2 has a silica-like structure with corner-shared BeF4 tetrahedra. BeCl2 and BeBr2 have chain structures with edge-shared tetrahedra. Beryllium oxide, BeO, is a white refractory solid which has a wurtzite crystal structure and a thermal conductivity as high as some metals. BeO is amphoteric. Beryllium sulfide, selenide and telluride are known, all having the zincblende structure. Beryllium nitride, Be3N2, is a high-melting-point compound which is readily hydrolyzed. Beryllium azide, BeN6, is known, and beryllium phosphide, Be3P2, has a similar structure to Be3N2. 
A number of beryllium borides are known, such as Be5B, Be4B, Be2B, BeB2, BeB6 and BeB12. Beryllium carbide, Be2C, is a refractory brick-red compound that reacts with water to give methane. No beryllium silicide has been identified. The halides BeX2 (X = F, Cl, Br, and I) have a linear monomeric molecular structure in the gas phase. Complexes of the halides are formed with one or more ligands donating a total of two pairs of electrons. Such compounds obey the octet rule. Other 4-coordinate complexes, such as the aqua-ion [Be(H2O)4]2+, also obey the octet rule. Aqueous solutions Solutions of beryllium salts, such as beryllium sulfate and beryllium nitrate, are acidic because of hydrolysis of the [Be(H2O)4]2+ ion. The concentration of the first hydrolysis product, [Be(H2O)3(OH)]+, is less than 1% of the beryllium concentration. The most stable hydrolysis product is the trimeric ion [Be3(OH)3(H2O)6]3+. Beryllium hydroxide, Be(OH)2, is insoluble in water at pH 5 or more. Consequently, beryllium compounds are generally insoluble at biological pH. Because of this, inhalation of beryllium metal dust leads to the development of the fatal condition of berylliosis. Be(OH)2 dissolves in strongly alkaline solutions. Beryllium(II) forms few complexes with monodentate ligands because the water molecules in the aquo-ion, [Be(H2O)4]2+, are bound very strongly to the beryllium ion. Notable exceptions are the series of water-soluble complexes with the fluoride ion: [Be(H2O)4]2+ + n F− ⇌ [Be(H2O)4−nFn]2−n + n H2O. Beryllium(II) forms many complexes with bidentate ligands containing oxygen-donor atoms. The species [Be3O(H2PO4)6]2− is notable for having a 3-coordinate oxide ion at its center. Basic beryllium acetate, Be4O(OAc)6, has an oxide ion surrounded by a tetrahedron of beryllium atoms. With organic ligands, such as the malonate ion, the acid deprotonates when forming the complex. The donor atoms are two oxygens: H2A + [Be(H2O)4]2+ ⇌ [BeA(H2O)2] + 2 H+ + 2 H2O; H2A + [BeA(H2O)2] ⇌ [BeA2]2− + 2 H+ + 2 H2O. The formation of a complex is in competition with the metal-ion hydrolysis reaction, and mixed complexes with both the anion and the hydroxide ion are also formed. For example, derivatives of the cyclic trimer are known, with a bidentate ligand replacing one or more pairs of water molecules. Aliphatic hydroxycarboxylic acids such as glycolic acid form rather weak monodentate complexes in solution, in which the hydroxyl group remains intact. In the solid state, the hydroxyl group may deprotonate: a hexamer, Na4[Be6(OCH2(O)O)6], was isolated long ago. Aromatic hydroxy ligands (i.e. phenols) form relatively strong complexes. For example, log K1 and log K2 values of 12.2 and 9.3 have been reported for complexes with tiron. Beryllium generally has a rather poor affinity for ammine ligands. Ligands such as EDTA behave as dicarboxylic acids. There are many early reports of complexes with amino acids, but unfortunately they are not reliable as the concomitant hydrolysis reactions were not understood at the time of publication. Values for log β of ca. 6 to 7 have been reported. The degree of formation is small because of competition with hydrolysis reactions. Organic chemistry Organoberyllium chemistry is limited to academic research due to the cost and toxicity of beryllium, beryllium derivatives and reagents required for the introduction of beryllium, such as beryllium chloride. Organometallic beryllium compounds are known to be highly reactive. 
Examples of known organoberyllium compounds are dineopentylberyllium, beryllocene (Cp2Be), diallylberyllium (by exchange reaction of diethyl beryllium with triallyl boron), bis(1,3-trimethylsilylallyl)beryllium, Be(mes)2, and (beryllium(I) complex) diberyllocene. Ligands can also be aryls and alkynyls. History The mineral beryl, which contains beryllium, has been used at least since the Ptolemaic dynasty of Egypt. In the first century CE, Roman naturalist Pliny the Elder mentioned in his encyclopedia Natural History that beryl and emerald ("smaragdus") were similar. The Papyrus Graecus Holmiensis, written in the third or fourth century CE, contains notes on how to prepare artificial emerald and beryl. Early analyses of emeralds and beryls by Martin Heinrich Klaproth, Torbern Olof Bergman, Franz Karl Achard, and always yielded similar elements, leading to the mistaken conclusion that both substances are aluminium silicates. Mineralogist René Just Haüy discovered that both crystals are geometrically identical, and he asked chemist Louis-Nicolas Vauquelin for a chemical analysis. In a 1798 paper read before the Institut de France, Vauquelin reported that he found a new "earth" by dissolving aluminium hydroxide from emerald and beryl in an additional alkali. The editors of the journal Annales de chimie et de physique named the new earth "glucine" for the sweet taste of some of its compounds. Klaproth preferred the name "beryllina" due to the fact that yttria also formed sweet salts. The name beryllium was first used by Friedrich Wöhler in 1828. Friedrich Wöhler and Antoine Bussy independently isolated beryllium in 1828 by the chemical reaction of metallic potassium with beryllium chloride, as follows: BeCl2 + 2 K → 2 KCl + Be Using an alcohol lamp, Wöhler heated alternating layers of beryllium chloride and potassium in a wired-shut platinum crucible. The above reaction immediately took place and caused the crucible to become white hot. Upon cooling and washing the resulting gray-black powder, he saw that it was made of fine particles with a dark metallic luster. The highly reactive potassium had been produced by the electrolysis of its compounds, a process discovered 21 years earlier. The chemical method using potassium yielded only small grains of beryllium from which no ingot of metal could be cast or hammered. The direct electrolysis of a molten mixture of beryllium fluoride and sodium fluoride by Paul Lebeau in 1898 resulted in the first pure (99.5 to 99.8%) samples of beryllium. However, industrial production started only after the First World War. The original industrial involvement included subsidiaries and scientists related to the Union Carbide and Carbon Corporation in Cleveland, Ohio, and Siemens & Halske AG in Berlin. In the US, the process was ruled by Hugh S. Cooper, director of The Kemet Laboratories Company. In Germany, the first commercially successful process for producing beryllium was developed in 1921 by Alfred Stock and Hans Goldschmidt. A sample of beryllium was bombarded with alpha rays from the decay of radium in a 1932 experiment by James Chadwick that uncovered the existence of the neutron. This same method is used in one class of radioisotope-based laboratory neutron sources that produce 30 neutrons for every million α particles. Beryllium production saw a rapid increase during World War II due to the rising demand for hard beryllium-copper alloys and phosphors for fluorescent lights. 
Most early fluorescent lamps used zinc orthosilicate with varying content of beryllium to emit greenish light. Small additions of magnesium tungstate improved the blue part of the spectrum to yield an acceptable white light. Halophosphate-based phosphors replaced beryllium-based phosphors after beryllium was found to be toxic. Electrolysis of a mixture of beryllium fluoride and sodium fluoride was used to isolate beryllium during the 19th century. The metal's high melting point makes this process more energy-consuming than corresponding processes used for the alkali metals. Early in the 20th century, the production of beryllium by the thermal decomposition of beryllium iodide was investigated following the success of a similar process for the production of zirconium, but this process proved to be uneconomical for volume production. Pure beryllium metal did not become readily available until 1957, even though it had been used as an alloying metal to harden and toughen copper much earlier. Beryllium could be produced by reducing beryllium compounds such as beryllium chloride with metallic potassium or sodium. Currently, most beryllium is produced by reducing beryllium fluoride with magnesium. The price on the American market for vacuum-cast beryllium ingots was about $338 per pound ($745 per kilogram) in 2001. Between 1998 and 2008, the world's production of beryllium had decreased from 343 to about 200 tonnes. It then increased to 230 metric tons by 2018, of which 170 tonnes came from the United States. Etymology Beryllium was named for the semiprecious mineral beryl, from which it was first isolated. The name beryllium was introduced by Wöhler in 1828. Although Humphry Davy failed to isolate it, he proposed the name glucium for the new metal, derived from the name glucina for the earth it was found in; altered forms of this name, glucinium or glucinum (symbol Gl) continued to be used into the 20th century. Both beryllium and glucinum were used concurrently until 1949, when the IUPAC adopted beryllium as the standard name of the element. Applications Radiation windows Because of its low atomic number and very low absorption for X-rays, the oldest and still one of the most important applications of beryllium is in radiation windows for X-ray tubes. Extreme demands are placed on purity and cleanliness of beryllium to avoid artifacts in the X-ray images. Thin beryllium foils are used as radiation windows for X-ray detectors, and their extremely low absorption minimizes the heating effects caused by high-intensity, low energy X-rays typical of synchrotron radiation. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. In scientific setups for various X-ray emission studies (e.g., energy-dispersive X-ray spectroscopy) the sample holder is usually made of beryllium because its emitted X-rays have much lower energies (≈100 eV) than X-rays from most studied materials. Low atomic number also makes beryllium relatively transparent to energetic particles. Therefore, it is used to build the beam pipe around the collision region in particle physics setups, such as all four main detector experiments at the Large Hadron Collider (ALICE, ATLAS, CMS, LHCb), the Tevatron and at SLAC. 
The low density of beryllium allows collision products to reach the surrounding detectors without significant interaction, its stiffness allows a powerful vacuum to be produced within the pipe to minimize interaction with gases, its thermal stability allows it to function correctly at temperatures of only a few degrees above absolute zero, and its diamagnetic nature keeps it from interfering with the complex multipole magnet systems used to steer and focus the particle beams. Mechanical applications Because of its stiffness, light weight and dimensional stability over a wide temperature range, beryllium metal is used for lightweight structural components in the defense and aerospace industries in high-speed aircraft, guided missiles, spacecraft, and satellites, including the James Webb Space Telescope. Several liquid-fuel rockets have used rocket nozzles made of pure beryllium. Beryllium powder was itself studied as a rocket fuel, but this use has never materialized. A small number of extreme high-end bicycle frames have been built with beryllium. From 1998 to 2000, the McLaren Formula One team used Mercedes-Benz engines with beryllium-aluminium alloy pistons. The use of beryllium engine components was banned following a protest by Scuderia Ferrari. Mixing about 2.0% beryllium into copper forms an alloy called beryllium copper that is six times stronger than copper alone. Beryllium alloys are used in many applications because of their combination of elasticity, high electrical conductivity and thermal conductivity, high strength and hardness, nonmagnetic properties, as well as good corrosion and fatigue resistance. These applications include non-sparking tools that are used near flammable gases (beryllium nickel), springs, membranes (beryllium nickel and beryllium iron) used in surgical instruments, and high temperature devices. As little as 50 parts per million of beryllium alloyed with liquid magnesium leads to a significant increase in oxidation resistance and decrease in flammability. The high elastic stiffness of beryllium has led to its extensive use in precision instrumentation, e.g. in inertial guidance systems and in the support mechanisms for optical systems. Beryllium-copper alloys were also applied as a hardening agent in "Jason pistols", which were used to strip the paint from the hulls of ships. In sound amplification systems, the speed at which sound travels directly affects the resonant frequency of the amplifier, thereby influencing the range of audible high-frequency sounds. Beryllium stands out due to its exceptionally high speed of sound propagation compared to other metals. This unique property allows beryllium to achieve higher resonant frequencies, making it an ideal material for use as a diaphragm in high-quality loudspeakers. Beryllium was used for cantilevers in high-performance phonograph cartridge styli, where its extreme stiffness and low density allowed for tracking weights to be reduced to 1 gram while still tracking high frequency passages with minimal distortion. An earlier major application of beryllium was in brakes for military airplanes because of its hardness, high melting point, and exceptional ability to dissipate heat. Environmental considerations have led to substitution by other materials. To reduce costs, beryllium can be alloyed with significant amounts of aluminium, resulting in the AlBeMet alloy (a trade name). This blend is cheaper than pure beryllium, while still retaining many desirable properties. 
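The exceptionally high sound speed mentioned in the acoustics discussion above follows directly from beryllium's stiffness and low density. Using the thin-rod approximation with the Young's modulus and density quoted earlier in this article (287 GPa and 1.85 g/cm3):

\[ v \approx \sqrt{\frac{E}{\rho}} = \sqrt{\frac{287\times10^{9}\ \mathrm{Pa}}{1850\ \mathrm{kg/m^{3}}}} \approx 1.25\times10^{4}\ \mathrm{m/s} \approx 12.5\ \mathrm{km/s}, \]

in reasonable agreement with the ~12.9 km/s figure given above; the exact value depends on the component geometry and on which elastic constants govern the wave mode.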
Mirrors Beryllium mirrors are of particular interest. Large-area mirrors, frequently with a honeycomb support structure, are used, for example, in meteorological satellites where low weight and long-term dimensional stability are critical. Smaller beryllium mirrors are used in optical guidance systems and in fire-control systems, e.g. in the German-made Leopard 1 and Leopard 2 main battle tanks. In these systems, very rapid movement of the mirror is required, which again dictates low mass and high rigidity. Usually the beryllium mirror is coated with hard electroless nickel plating which can be more easily polished to a finer optical finish than beryllium. In some applications, the beryllium blank is polished without any coating. This is particularly applicable to cryogenic operation where thermal expansion mismatch can cause the coating to buckle. The James Webb Space Telescope has 18 hexagonal beryllium sections for its mirrors, each plated with a thin layer of gold. Because JWST will face a temperature of 33 K, the mirror is made of gold-plated beryllium, which is capable of handling extreme cold better than glass. Beryllium contracts and deforms less than glass and remains more uniform in such temperatures. For the same reason, the optics of the Spitzer Space Telescope are entirely built of beryllium metal. Magnetic applications Beryllium is non-magnetic. Therefore, tools fabricated out of beryllium-based materials are used by naval or military explosive ordnance disposal teams for work on or near naval mines, since these mines commonly have magnetic fuzes. They are also found in maintenance and construction materials near magnetic resonance imaging (MRI) machines because of the high magnetic fields generated. In the fields of radio communications and powerful (usually military) radars, hand tools made of beryllium are used to tune the highly magnetic klystrons, magnetrons, traveling wave tubes, etc., that are used for generating high levels of microwave power in the transmitters. Nuclear applications Thin plates or foils of beryllium are sometimes used in nuclear weapon designs as the very outer layer of the plutonium pits in the primary stages of thermonuclear bombs, placed to surround the fissile material. These layers of beryllium are good "pushers" for the implosion of the plutonium-239, and they are good neutron reflectors, just as in beryllium-moderated nuclear reactors. Beryllium is commonly used in some neutron sources in laboratory devices in which relatively few neutrons are needed (rather than having to use a nuclear reactor or a particle accelerator-powered neutron generator). For this purpose, a target of beryllium-9 is bombarded with energetic alpha particles from a radioisotope such as polonium-210, radium-226, plutonium-238, or americium-241. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon-12, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Such alpha decay-driven beryllium neutron sources, named "urchin" neutron initiators, were used in some early atomic bombs. Neutron sources in which beryllium is bombarded with gamma rays from a gamma decay radioisotope, are also used to produce laboratory neutrons. Beryllium is used in fuel fabrication for CANDU reactors. The fuel elements have small appendages that are resistance brazed to the fuel cladding using an induction brazing process with Be as the braze filler material. 
Bearing pads are brazed in place to prevent contact between the fuel bundle and the pressure tube containing it, and inter-element spacer pads are brazed on to prevent element to element contact. Beryllium is used at the Joint European Torus nuclear-fusion research laboratory, and it will be used in the more advanced ITER to condition the components which face the plasma. Beryllium has been proposed as a cladding material for nuclear fuel rods, because of its good combination of mechanical, chemical, and nuclear properties. Beryllium fluoride is one of the constituent salts of the eutectic salt mixture FLiBe, which is used as a solvent, moderator and coolant in many hypothetical molten salt reactor designs, including the liquid fluoride thorium reactor (LFTR). Acoustics The low weight and high rigidity of beryllium make it useful as a material for high-frequency speaker drivers. Because beryllium is expensive (many times more than titanium), hard to shape due to its brittleness, and toxic if mishandled, beryllium tweeters are limited to high-end home, pro audio, and public address applications. Some high-fidelity products have been fraudulently claimed to be made of the material. Some high-end phonograph cartridges used beryllium cantilevers to improve tracking by reducing mass. Electronic Beryllium is a p-type dopant in III-V compound semiconductors. It is widely used in materials such as GaAs, AlGaAs, InGaAs and InAlAs grown by molecular beam epitaxy (MBE). Cross-rolled beryllium sheet is an excellent structural support for printed circuit boards in surface-mount technology. In critical electronic applications, beryllium is both a structural support and heat sink. The application also requires a coefficient of thermal expansion that is well matched to the alumina and polyimide-glass substrates. The beryllium-beryllium oxide composite "E-Materials" have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials. Beryllium oxide is useful for many applications that require the combined properties of an electrical insulator and an excellent heat conductor, with high strength and hardness and a very high melting point. Beryllium oxide is frequently used as an insulator base plate in high-power transistors in radio frequency transmitters for telecommunications. Beryllium oxide is being studied for use in increasing the thermal conductivity of uranium dioxide nuclear fuel pellets. Beryllium compounds were used in fluorescent lighting tubes, but this use was discontinued because of the disease berylliosis which developed in the workers who were making the tubes. Medical applications Beryllium is a component of several dental alloys. Beryllium is used in X-ray windows because it is transparent to X-rays, allowing for clearer and more efficient imaging. In medical imaging equipment, such as CT scanners and mammography machines, beryllium's strength and light weight enhance durability and performance. Beryllium is used in analytical equipment for blood, HIV, and other diseases. Beryllium alloys are used in surgical instruments, optical mirrors, and laser systems for medical treatments. Toxicity and safety Biological effects Approximately 35 micrograms of beryllium is found in the average human body, an amount not considered harmful. Beryllium is chemically similar to magnesium and therefore can displace it from enzymes, which causes them to malfunction. 
Because Be2+ is a highly charged and small ion, it can easily get into many tissues and cells, where it specifically targets cell nuclei, inhibiting many enzymes, including those used for synthesizing DNA. Its toxicity is exacerbated by the fact that the body has no means to control beryllium levels, and once inside the body, beryllium cannot be removed. Inhalation Chronic beryllium disease (CBD), or berylliosis, is a pulmonary and systemic granulomatous disease caused by inhalation of dust or fumes contaminated with beryllium; either large amounts over a short time or small amounts over a long time can lead to this ailment. Symptoms of the disease can take up to five years to develop; about a third of patients with it die and the survivors are left disabled. The International Agency for Research on Cancer (IARC) lists beryllium and beryllium compounds as Category 1 carcinogens. Occupational exposure In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for beryllium and beryllium compounds of 0.2 μg/m3 as an 8-hour time-weighted average (TWA) and 2.0 μg/m3 as a short-term exposure limit over a sampling period of 15 minutes. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) upper-bound threshold of 0.5 μg/m3. The IDLH (immediately dangerous to life and health) value is 4 mg/m3. The toxicity of beryllium is on par with other toxic metalloids/metals, such as arsenic and mercury. Exposure to beryllium in the workplace can lead to a sensitized immune response, and over time development of berylliosis. NIOSH in the United States researches these effects in collaboration with a major manufacturer of beryllium products. NIOSH also conducts genetic research on sensitization and CBD, independently of this collaboration. Acute beryllium disease in the form of chemical pneumonitis was first reported in Europe in 1933 and in the United States in 1943. A survey found that about 5% of workers in plants manufacturing fluorescent lamps in 1949 in the United States had beryllium-related lung diseases. Chronic berylliosis resembles sarcoidosis in many respects, and the differential diagnosis is often difficult. It killed some early workers in nuclear weapons design, such as Herbert L. Anderson. Beryllium may be found in coal slag. When the slag is formulated into an abrasive agent for blasting paint and rust from hard surfaces, the beryllium can become airborne and become a source of exposure. Although the use of beryllium compounds in fluorescent lighting tubes was discontinued in 1949, potential for exposure to beryllium exists in the nuclear and aerospace industries, in the refining of beryllium metal and the melting of beryllium-containing alloys, in the manufacturing of electronic devices, and in the handling of other beryllium-containing material. Detection Early researchers undertook the highly hazardous practice of identifying beryllium and its various compounds from its sweet taste. A modern test for beryllium in air and on surfaces has been developed and published as an international voluntary consensus standard, ASTM D7202. The procedure uses dilute ammonium bifluoride for dissolution and fluorescence detection with beryllium bound to sulfonated hydroxybenzoquinoline, allowing up to 100 times more sensitive detection than the recommended limit for beryllium concentration in the workplace. Fluorescence increases with increasing beryllium concentration. 
The new procedure has been successfully tested on a variety of surfaces and is effective for the dissolution and detection of refractory beryllium oxide and siliceous beryllium in minute concentrations (ASTM D7458). The NIOSH Manual of Analytical Methods contains methods for measuring occupational exposures to beryllium. Notes References Cited sources Further reading Mroz MM, Balkissoon R, and Newman LS. "Beryllium". In: Bingham E, Cohrssen B, Powell C (eds.) Patty's Toxicology, Fifth Edition. New York: John Wiley & Sons 2001, 177–220. Walsh, KA, Beryllium Chemistry and Processing. Vidal, EE. et al. Eds. 2009, Materials Park, OH:ASM International. Beryllium Lymphocyte Proliferation Testing (BeLPT). DOE Specification 1142–2001. Washington, DC: U.S. Department of Energy, 2001. 2007, Eric Scerri,The periodic table: Its story and its significance, Oxford University Press, New York, External links ATSDR Case Studies in Environmental Medicine: Beryllium Toxicity U.S. Department of Health and Human Services It's Elemental – Beryllium MSDS: ESPI Metals Beryllium at The Periodic Table of Videos (University of Nottingham) National Institute for Occupational Safety and Health – Beryllium Page National Supplemental Screening Program (Oak Ridge Associated Universities) Historic Price of Beryllium in USA Chemical elements Alkaline earth metals Neutron moderators Nuclear materials IARC Group 1 carcinogens Chemical hazards Reducing agents Chemical elements with hexagonal close-packed structure
Beryllium
[ "Physics", "Chemistry" ]
8,911
[ "Chemical elements", "Redox", "Reducing agents", "Chemical hazards", "Materials", "Nuclear materials", "Atoms", "Matter" ]
3,397
https://en.wikipedia.org/wiki/Bridge
A bridge is a structure built to span a physical obstacle (such as a body of water, valley, road, or railway) without blocking the path underneath. It is constructed for the purpose of providing passage over the obstacle, which is usually something that is otherwise difficult or impossible to cross. There are many different designs of bridges, each serving a particular purpose and applicable to different situations. Designs of bridges vary depending on factors such as the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it. The earliest bridges were likely made with fallen trees and stepping stones. The Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge, dating from the 13th century BC, in the Peloponnese is one of the oldest arch bridges in existence and use. Etymology The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning. The Oxford English Dictionary also notes that there is some suggestion that the word can be traced directly back to Proto-Indo-European *bʰrēw-. However, they also note that "this poses semantic problems." The origin of the word for the card game of the same name is unknown, but may be from folk etymology. History The simplest and earliest types of bridges were stepping stones. Neolithic people also built a form of boardwalk across marshes; examples of such bridges include the Sweet Track and the Post Track in England, approximately 6000 years old. Ancient people would also have used log bridges consisting of logs that fell naturally or were intentionally felled or placed across streams. Some of the first human-made bridges with significant span were probably intentionally felled trees. Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden bridge that crossed upper Lake Zürich in Switzerland; prehistoric timber pilings discovered to the west of the Seedamm causeway date back to 1523 BC. The first wooden footbridge there led across Lake Zürich; it was reconstructed several times through the late 2nd century AD, when the Roman Empire built a wooden bridge to carry transport across the lake. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that was used until 1878; it was approximately long and wide. On 6 April 2001, a reconstruction of the original wooden footbridge was opened; it is also the longest wooden bridge in Switzerland. The Arkadiko Bridge is one of four Mycenaean corbel arch bridges part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use. Several intact, arched stone bridges from the Hellenistic era can be found in the Peloponnese. The greatest bridge builders of antiquity were the ancient Romans. The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs, some of which still stand today. An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone. One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. 
Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered). In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges. A Mauryan bridge near Girnar was surveyed by James Prinsep. The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I. Stronger bridges using plaited bamboo and iron chains were in use in India by about the 4th century. A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India. Although large bridges of wooden construction existed in China at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman-era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction. Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century. The Ashanti built bridges over streams and rivers. They were constructed by pounding four large forked tree trunks into the stream bed, placing beams along these forked pillars, then positioning cross-beams that were finally covered with four to six inches of dirt. During the 18th century, there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann and Johannes Grubenmann, as well as others. The first book on bridge engineering was written by Hubert Gautier in 1716. A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It was the first bridge to use cast iron, in the form of arches spanning the river Severn. With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel. In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia. In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses: the Queen Post, the Burr Arch and the Town Lattice. Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie and play The Bridges of Madison County. In 1927, welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge, which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland. Types of bridges Bridges can be categorized in several different ways. Common categories include the type of structural elements used, what they carry, whether they are fixed or movable, and the materials used.
Structure types Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss. Some engineers sub-divide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section. A slab can be solid or voided (though this is no longer favored for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab. A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular. Fixed or movable bridges Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled, taken apart, transported to a different site, and re-used. They are important in military engineering and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered. The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them. Double-decked bridges Double-decked (or double-decker) bridges have two levels, such as the George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, which is the world's busiest bridge, carrying 102 million vehicles annually; truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels. Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks at the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
Viaducts A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct. Multi-way bridge A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples. Bridge types by use A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline (Pipe bridge) or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. An overway is a bridge that separates incompatible intersecting traffic, especially road and rail. Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant, which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas. Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife. Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undertimbers of bridges all around the world are spots of prevalent graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges. Bridge types by material The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are built of concrete, steel, fiber reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India and wisteria vines in Japan. Analysis and design Unlike buildings whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements; namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment. Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular. The analysis can be one-, two-, or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient. On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, either by pre-tensioning of beams prior to installation or post-tensioning on site. In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles.
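As a minimal illustration of the kind of quantities such an analysis produces, the sketch below computes the support reactions, peak shear force and peak bending moment for the simplest possible idealisation: a single simply supported span under one concentrated load. This is a closed-form hand calculation, not a finite element model, and the span and load values are purely illustrative. In design, effects like these are then compared against factored member resistances, as described next.

```python
def simply_supported_point_load(span_m, load_kN, load_pos_m):
    """Reactions, max shear and max bending moment for a simply supported beam
    carrying a single point load at load_pos_m from the left support."""
    a = load_pos_m
    b = span_m - a
    r_left = load_kN * b / span_m    # left reaction (kN)
    r_right = load_kN * a / span_m   # right reaction (kN)
    max_shear = max(r_left, r_right)          # peak shear force (kN)
    max_moment = load_kN * a * b / span_m     # peak bending moment (kN*m), under the load
    return r_left, r_right, max_shear, max_moment

# Illustrative 20 m span with a 300 kN load applied 8 m from the left support.
r1, r2, v_max, m_max = simply_supported_point_load(20.0, 300.0, 8.0)
print(f"Reactions: {r1:.0f} kN and {r2:.0f} kN")
print(f"Max shear: {v_max:.0f} kN, max bending moment: {m_max:.0f} kN*m")
```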
In simple terms, LRFD means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and deviate further from unity when the uncertainty is greater. Aesthetics Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance. Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics, as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York. Bridges are typically more aesthetically pleasing if they are simple in shape, the deck is thinner in proportion to its span, the lines of the structure are continuous, and the shapes of the structural elements reflect the forces acting on them. To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream-washed pebbles, intended only to convey an impression of a stream. Often in palaces, a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges crosses a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants. Bridge maintenance The estimated life of bridges varies between 25 and 80 years depending on location and material. Bridges may last a hundred years with proper maintenance and rehabilitation. Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years, and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable and is higher in some countries than spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions. This can yield a high benefit, allowing existing bridges to be used far beyond their planned lifetime. Bridge traffic loading While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research. This is a statistical problem as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally considered to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1000 years. Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed upon by standards drafting committees of experts but today, this situation is changing.
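One common way such a characteristic maximum can be estimated from measured or simulated load-effect data is to fit an extreme value distribution to a series of period maxima and read off the quantile corresponding to the target return period. The sketch below does this with a Gumbel fit in SciPy; the data are synthetic placeholders, and real calibrations (such as the Eurocode's) involve considerably more than this single step.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)

# Synthetic stand-in for annual maximum mid-span bending moments (kN*m),
# e.g. as might be derived from weigh-in-motion records run through a bridge model.
annual_maxima = 4000 + 250 * rng.gumbel(size=40)

# Fit a Gumbel (extreme value type I) distribution to the annual maxima.
loc, scale = gumbel_r.fit(annual_maxima)

# Characteristic value for a chosen return period T (years): the level with an
# annual exceedance probability of 1/T.
for T in (75, 1000):
    level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.0f} kN*m")
```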
It is now possible to measure the components of bridge traffic load by weighing trucks in motion, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes, side-by-side (same direction) lanes, traffic growth, permit/non-permit vehicles and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93, intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way. Traffic loading on long span bridges Most bridge standards are only applicable for short and medium spans; for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans, on the other hand, are governed by congested traffic and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-Motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free-flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data. Others have used microsimulation to generate typical clusters of vehicles on the bridge. Bridge vibration Bridges vibrate under load and this contributes, to a greater or lesser extent, to the stresses. Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge, which collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force). Vehicle-bridge dynamic interaction There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated. The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency.
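To give a sense of the numbers involved, the first natural frequency of a bridge idealised as a simply supported Euler-Bernoulli beam can be estimated from its span, flexural rigidity and mass per unit length. The sketch below evaluates the standard closed-form expression; the property values are illustrative only, and a real bridge would normally be assessed with a detailed dynamic model.

```python
import math

def simply_supported_natural_frequency(span_m, EI_Nm2, mass_per_m, mode=1):
    """Natural frequency in Hz of mode n of a simply supported Euler-Bernoulli
    beam: f_n = (n^2 * pi / (2 * L^2)) * sqrt(EI / m)."""
    return (mode**2 * math.pi / (2 * span_m**2)) * math.sqrt(EI_Nm2 / mass_per_m)

# Illustrative medium-span girder bridge: 40 m span, EI = 5e11 N*m^2,
# and 20 tonnes of deck mass per metre.
f1 = simply_supported_natural_frequency(40.0, 5.0e11, 20_000.0)
print(f"Estimated first natural frequency: {f1:.2f} Hz")
```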
The vehicle-related frequencies include body bounce and axle hop, but there are also pseudo-frequencies associated with the vehicle's speed of crossing and there are many frequencies associated with the surface profile. Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events. Bridge failures The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance. Bridge failures first attracted national interest in Britain during the Victorian era, when many new designs were being built, often using new materials, with some of them failing catastrophically. In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete". Bridge health monitoring There are several methods used to monitor the condition of large structures, such as bridges. Many long-span bridges are now routinely monitored with a range of sensors, including strain transducers, accelerometers, tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water. Crowdsourcing bridge conditions by accessing data passively captured by cell phones, which routinely include accelerometers and GPS sensors, has been suggested as an alternative to including sensors during bridge construction and as a supplement to professional inspections. An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface. The advantage of this method is that the equipment setup time is shorter and, unlike accelerometers, it can take measurements on multiple structures in quick succession. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from. Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection. This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load. While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition. These vehicles can be equipped with accelerometers, gyrometers and Laser Doppler Vibrometers, and some even have the capability to apply a resonant force to the road surface to dynamically excite the bridge at its resonant frequency.
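A basic building block of such monitoring, whether the accelerometer is fixed to the bridge or carried by an instrumented vehicle, is extracting the dominant vibration frequencies from an acceleration record. The sketch below does this with a discrete Fourier transform on a synthetic signal; real records are noisier, and separating vehicle-related from bridge-related frequencies remains an open research topic, as noted above.

```python
import numpy as np

fs = 200.0                      # sampling rate (Hz)
t = np.arange(0, 30.0, 1 / fs)  # 30-second record

# Synthetic acceleration record: a 2.4 Hz "bridge-like" component, an 11 Hz
# "axle-hop-like" component, plus measurement noise. Values are illustrative.
rng = np.random.default_rng(1)
signal = (0.8 * np.sin(2 * np.pi * 2.4 * t)
          + 0.3 * np.sin(2 * np.pi * 11.0 * t)
          + 0.2 * rng.standard_normal(t.size))

# One-sided amplitude spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Report the strongest non-zero-frequency peak.
peak = freqs[1:][np.argmax(spectrum[1:])]
print(f"Dominant frequency in the record: {peak:.2f} Hz")
```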
Visual index See also Air draft Architectural engineering Bridge chapel Bridge tower Bridge to nowhere Bridges Act BS 5400 Causeway Coal trestle Covered bridges Cross-sea traffic ways Culvert Deck Devil's Bridge Footbridge Jet bridge Landscape architecture Megaproject Military bridges Orphan bridge Outline of bridges Overpass Pier (bridge structure) Pontoon bridge Rigid-frame bridge Structure gauge Transporter bridge Tensegrity Trestle bridge Tunnel References Further reading Whitney, Charles S. Bridges of the World: Their Design and Construction. Mineola, NY: Dover Publications, 2003. (Unabridged republication of Bridges: A Study in Their Art, Science, and Evolution, 1929.) External links Digital Bridge: Bridges of the Nineteenth Century, a collection of digitized books at Lehigh University Structurae – International Database and Gallery of Engineering Structures with over 10,000 bridges. U.S. Federal Highway Administration Bridge Technology The Museum of Japanese Timber Bridges Fukuoka University "bridge-info.org": site for bridges Transport buildings and structures Articles containing video clips Infrastructure Structural engineering
Bridge
[ "Engineering" ]
5,027
[ "Structural engineering", "Construction", "Civil engineering", "Bridges", "Infrastructure" ]